Object-processing neural efficiency differentiates object from spatial visualizers.
Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria
2008-11-19
The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in the lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.
Neural network system for purposeful behavior based on foveal visual preprocessor
NASA Astrophysics Data System (ADS)
Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.
1996-10-01
A biologically plausible model of a system with adaptive behavior in an a priori given environment and resistance to impairment has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with an associative rule. The second, a neural network, determines adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity onto the system's behavior in the environment. The system has been studied by computer simulation imitating the collision-free motion of a mobile robot. After a learning period, the system 'moves' along a road without collisions. It is shown that, despite impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested as a form of visual input to the system.
A neural-visualization IDS for honeynet data.
Herrero, Álvaro; Zurutuza, Urko; Corchado, Emilio
2012-04-01
Neural intelligent systems can provide a visualization of the network traffic for security staff, in order to reduce the widely known high false-positive rate associated with misuse-based Intrusion Detection Systems (IDSs). Unlike previous work, this study proposes unsupervised neural models that generate an intuitive visualization of the captured traffic, rather than network statistics. These snapshots of network events are immensely useful for security personnel that monitor network behavior. The system is based on the use of different neural projection and unsupervised methods for the visual inspection of honeypot data, and may be seen as a complementary network security tool that sheds light on internal data structures through visual inspection of the traffic itself. Furthermore, it is intended to facilitate verification and assessment of Snort performance (a well-known and widely-used misuse-based IDS), through the visualization of attack patterns. Empirical verification and comparison of the proposed projection methods are performed in a real domain, where two different case studies are defined and analyzed.
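The study's projection models are specialized unsupervised neural methods, but the general idea of mapping high-dimensional traffic records to 2-D coordinates for visual inspection can be sketched with plain PCA as a simpler linear stand-in; the feature names and data below are hypothetical, not from the study:

```python
import numpy as np

def pca_project(X, k=2):
    """Project rows of X onto the top-k principal components -- a minimal
    linear stand-in for the unsupervised neural projection models used
    for visual inspection of traffic."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy "traffic" matrix: 100 packets x 4 numeric features
# (e.g. duration, size, source port, destination port -- hypothetical).
rng = np.random.default_rng(0)
traffic = rng.standard_normal((100, 4)) * np.array([1.0, 5.0, 2.0, 0.5])
coords = pca_project(traffic, k=2)  # 2-D coordinates for a scatter plot
```

Each row of `coords` can then be plotted as one point, so anomalous traffic shows up as outlying clusters in the scatter plot.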
A case for spiking neural network simulation based on configurable multiple-FPGA systems.
Yang, Shufan; Wu, Qiang; Li, Renfa
2011-09-01
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks, however, cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such systems, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided that the spiking neural network can take full advantage of the hardware's inherent parallelism. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that neuroscientists can put together sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
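As a concrete illustration of the kind of neuron model such simulators implement, here is a minimal leaky integrate-and-fire neuron in software; the parameter values are illustrative and not taken from the paper's hardware design:

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron and return its spike times.
    `current` gives the input drive at each time step of width `dt`."""
    v = v_rest
    spikes = []
    for i, drive in enumerate(current):
        # Euler step of the membrane equation dv/dt = (-(v - v_rest) + I) / tau
        v += dt * (-(v - v_rest) + drive) / tau
        if v >= v_thresh:
            spikes.append(i * dt)  # record spike time, then reset
            v = v_reset
    return spikes

# Constant suprathreshold drive produces a regular spike train.
spikes = lif_spikes(np.full(1000, 2.0))
```

A hardware implementation evaluates this same update rule for many neurons in parallel each clock cycle, which is where the FPGA's speed advantage over sequential software simulation comes from.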
Alvarez, George A.; Nakayama, Ken; Konkle, Talia
2016-01-01
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. 
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
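Representational similarity analysis of the kind described can be sketched in a few lines: build a neural representational dissimilarity matrix (RDM) from per-category response patterns, then correlate its off-diagonal entries with a behavioral dissimilarity matrix. All data below are synthetic stand-ins for the fMRI patterns and pairwise search times:

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between each pair of condition response patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rdm_brain_behavior_corr(neural_patterns, behavior_dissim):
    """Spearman correlation between the off-diagonal entries of a neural
    RDM and a behavioral dissimilarity matrix (e.g. search times for
    each category pair)."""
    n = neural_patterns.shape[0]
    iu = np.triu_indices(n, k=1)          # upper triangle, excluding diagonal
    neural_rdm = rdm(neural_patterns)
    rho, p = spearmanr(neural_rdm[iu], behavior_dissim[iu])
    return rho, p

# Toy example: 8 object categories, 50-voxel response patterns.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((8, 50))
# Hypothetical behavioral matrix sharing the neural structure plus noise,
# standing in for pairwise search times.
behavior = rdm(patterns) + 0.1 * rng.standard_normal((8, 8))
behavior = (behavior + behavior.T) / 2    # symmetrize
rho, p = rdm_brain_behavior_corr(patterns, behavior)
```

A high `rho` for a given brain region indicates that its representational structure has the requisite shape to predict behavior, which is the form of the brain/behavior correlations reported above.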
Multiplexing in the primate motion pathway.
Huk, Alexander C
2012-06-01
This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing, and of the computations required for demultiplexing, may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.
Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.
Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J
2017-02-01
Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, there is also great fluctuation of vividness within individuals, such that looking only at differences between people necessarily obscures the picture. In this study, we show that variation in moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery resembles the neural response during perception, the more vivid or perception-like the imagery experience is.
Loss of Neurofilament Labeling in the Primary Visual Cortex of Monocularly Deprived Monkeys
Duffy, Kevin R.; Livingstone, Margaret S.
2009-01-01
Visual experience during early life is important for the development of neural organizations that support visual function. Closing one eye (monocular deprivation) during this sensitive period can cause a reorganization of neural connections within the visual system that leaves the deprived eye functionally disconnected. We have assessed the pattern of neurofilament labeling in monocularly deprived macaque monkeys to examine the possibility that a cytoskeletal change contributes to deprivation-induced reorganization of neural connections within the primary visual cortex (V-1). Monocular deprivation for three months starting around the time of birth caused a significant loss of neurofilament labeling within deprived-eye ocular dominance columns. Three months of monocular deprivation initiated in adulthood did not produce a loss of neurofilament labeling. The evidence that neurofilament loss was found only when deprivation occurred during the sensitive period supports the notion that the loss permits restructuring of deprived-eye neural connections within the visual system. These results provide evidence that, in addition to reorganization of LGN inputs, the intrinsic circuitry of V-1 neurons is altered when monocular deprivation occurs early in development. PMID:15563721
Neural Mechanisms of Selective Visual Attention.
Moore, Tirin; Zirnsak, Marc
2017-01-03
Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.
Neural codes of seeing architectural styles
Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.
2017-01-01
Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture. PMID:28071765
Modeling for Visual Feature Extraction Using Spiking Neural Networks
NASA Astrophysics Data System (ADS)
Kimura, Ichiro; Kuroe, Yasuaki; Kotera, Hiromichi; Murata, Tomoya
This paper develops models of visual feature extraction in biological systems using spiking neural networks (SNNs). SNNs are promising for developing such models because information is encoded and processed by spike trains, as in biological neural networks. Two SNN architectures are proposed for modeling the directionally selective cell and the motion parallax cell in neuro-sensory systems, and they are trained to reproduce the actual biological responses of each cell. To validate the developed models, their representation ability is investigated and their visual feature extraction mechanisms are discussed from a neurophysiological viewpoint. This study is expected to be a first step toward developing a sensor system similar to biological systems, and a complementary approach to investigating the function of the brain.
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
Optical and neural anisotropy in peripheral vision
Zheleznyak, Len; Barbot, Antoine; Ghosh, Atanu; Yoon, Geunyoung
2016-01-01
Optical blur in the peripheral retina is known to be highly anisotropic due to nonrotationally symmetric wavefront aberrations such as astigmatism and coma. At the neural level, the visual system exhibits anisotropies in orientation sensitivity across the visual field. In the fovea, the visual system shows higher sensitivity for cardinal over diagonal orientations, which is referred to as the oblique effect. However, in the peripheral retina, the neural visual system becomes more sensitive to radially-oriented signals, a phenomenon known as the meridional effect. Here, we examined the relative contributions of optics and neural processing to the meridional effect in 10 participants at 0°, 10°, and 20° in the temporal retina. Optical anisotropy was quantified by measuring the eye's habitual wavefront aberrations. Neural anisotropy, in turn, was evaluated by measuring contrast sensitivity (at 2 and 4 cyc/deg) while correcting the eye's aberrations with an adaptive optics vision simulator, thus bypassing any optical factors. As eccentricity increased, optical and neural anisotropy increased in magnitude. The average ratio of horizontal to vertical optical MTF (at 2 and 4 cyc/deg) at 0°, 10°, and 20° was 0.96 ± 0.14, 1.41 ± 0.54 and 2.15 ± 1.38, respectively. Similarly, the average ratio of horizontal to vertical contrast sensitivity with full optical correction at 0°, 10°, and 20° was 0.99 ± 0.15, 1.28 ± 0.28 and 1.75 ± 0.80, respectively. These results indicate that the neural system's orientation sensitivity coincides with habitual blur orientation. These findings support the neural origin of the meridional effect and raise important questions regarding the role of peripheral anisotropic optical quality in developing the meridional effect and emmetropization. PMID:26928220
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
Sensori-motor experience leads to changes in visual processing in the developing brain.
James, Karin Harman
2010-03-01
Since Broca's studies on language processing, cortical functional specialization has been considered to be integral to efficient neural processing. A fundamental question in cognitive neuroscience concerns the type of learning that is required for functional specialization to develop. To address this issue with respect to the development of neural specialization for letters, we used functional magnetic resonance imaging (fMRI) to compare brain activation patterns in pre-school children before and after different letter-learning conditions: a sensori-motor group practised printing letters during the learning phase, while the control group practised visual recognition. Results demonstrated an overall left-hemisphere bias for processing letters in these pre-literate participants, but, more interestingly, showed enhanced blood oxygen-level-dependent activation in the visual association cortex during letter perception only after sensori-motor (printing) learning. It is concluded that sensori-motor experience augments processing in the visual system of pre-school children. The change of activation in these neural circuits provides important evidence that 'learning-by-doing' can lay the foundation for, and potentially strengthen, the neural systems used for visual letter recognition.
Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze
Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.
2014-01-01
Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze that are optimized for the visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057
Normalization as a canonical neural computation
Carandini, Matteo; Heeger, David J.
2012-01-01
There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
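The normalization computation described in this review has a standard form: each neuron's driving input is divided by a factor combining a semi-saturation constant with the pooled activity of neighboring neurons. A typical formulation (symbols follow common usage in this literature rather than any single model) is:

```latex
R_i \;=\; \gamma \,\frac{D_i^{\,n}}{\sigma^{n} + \sum_{j} D_j^{\,n}}
```

where $D_i$ is the driving input to neuron $i$, the sum runs over the normalization pool, the exponent $n$ controls the steepness of the response, $\sigma$ sets the semi-saturation point (and prevents division by zero), and $\gamma$ scales the overall response.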
Vision and visual navigation in nocturnal insects.
Warrant, Eric; Dacke, Marie
2011-01-01
With their highly sensitive visual systems, nocturnal insects have evolved a remarkable capacity to discriminate colors, orient themselves using faint celestial cues, fly unimpeded through a complicated habitat, and navigate to and from a nest using learned visual landmarks. Even though the compound eyes of nocturnal insects are significantly more sensitive to light than those of their closely related diurnal relatives, their photoreceptors absorb photons at very low rates in dim light, even during demanding nocturnal visual tasks. To explain this apparent paradox, it is hypothesized that the necessary bridge between retinal signaling and visual behavior is a neural strategy of spatial and temporal summation at a higher level in the visual system. Exactly where in the visual system this summation takes place, and the nature of the neural circuitry that is involved, is currently unknown but provides a promising avenue for future research.
Kwon, Hyeok Gyu; Jang, Sung Ho
2014-08-22
A few studies have reported on the neural connectivity of some structures of the visual system in the human brain. However, little is known about the neural connectivity of the lateral geniculate body (LGB). In the current study, using diffusion tensor tractography (DTT), we attempted to investigate the neural connectivity of the LGB in normal subjects. A total of 52 healthy subjects were recruited for this study. A seed region of interest was placed on the LGB using the FMRIB Software Library, which provides probabilistic tractography based on a multi-fiber model. Connectivity was defined as the incidence of connection between the LGB and target brain areas at thresholds of 5, 25, and 50 streamlines, and was expressed as the percentage of hemispheres, across all 52 subjects, showing a connection. We found the following characteristics of LGB connectivity at the threshold of 5 streamlines: (1) high connectivity to the corpus callosum (91.3%) and, via the corpus callosum, to the contralateral temporal cortex (56.7%); (2) high connectivity to the ipsilateral cerebral cortex: the temporal lobe (100%), primary visual cortex (95.2%), and visual association cortex (77.9%). The LGB appeared to have high connectivity to the corpus callosum and both temporal cortices as well as the ipsilateral occipital cortex. We believe that these results will be helpful in investigation of the neural network associated with the visual system and of brain plasticity of the visual system after brain injury.
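The connectivity measure described (the percentage of hemispheres whose streamline count to a target region reaches a threshold) can be sketched as follows; the streamline counts below are synthetic stand-ins for the probabilistic tractography output:

```python
import numpy as np

def connectivity_percent(counts, threshold):
    """Percentage of hemispheres whose streamline count to the target
    region meets or exceeds the threshold (5, 25, or 50 in the study)."""
    counts = np.asarray(counts)
    return 100.0 * np.mean(counts >= threshold)

# Hypothetical streamline counts from the LGB seed to one target region,
# one value per hemisphere (52 subjects x 2 hemispheres = 104 samples).
rng = np.random.default_rng(1)
streamline_counts = rng.poisson(lam=30, size=104)

# Connectivity can only drop (or stay equal) as the threshold is raised.
percents = [connectivity_percent(streamline_counts, t) for t in (5, 25, 50)]
```

Reporting the measure at several thresholds, as the study does, shows how robust each connection is to the choice of cutoff.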
Improving the performance of the amblyopic visual system
Levi, Dennis M.; Li, Roger W.
2008-01-01
Experience-dependent plasticity is closely linked with the development of sensory function; however, there is also growing evidence for plasticity in the adult visual system. This review re-examines the notion of a sensitive period for the treatment of amblyopia in the light of recent experimental and clinical evidence for neural plasticity. One recently proposed method for improving the effectiveness and efficiency of treatment that has received considerable attention is ‘perceptual learning’. Specifically, both children and adults with amblyopia can improve their perceptual performance through extensive practice on a challenging visual task. The results suggest that perceptual learning may be effective in improving a range of visual performance and, importantly, the improvements may transfer to visual acuity. Recent studies have sought to explore the limits and time course of perceptual learning as an adjunct to occlusion and to investigate the neural mechanisms underlying the visual improvement. These findings, along with the results of new clinical trials, suggest that it might be time to reconsider our notions about neural plasticity in amblyopia. PMID:19008199
Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi
2018-05-16
Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In natural multisensory environments, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, whether the practice effect during cross-modal selective attention is supra-modal or modality-specific, and whether it shows similar modality preferences to the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior more flexibly with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, varying as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, but only from the ventral visual stream during visual attention. Thus, to efficiently suppress irrelevant visual information with practice, auditory attention needs additionally to decouple the auditory system from the dorsal visual stream.
The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention.
Cognitive processing in the primary visual cortex: from perception to memory.
Supèr, Hans
2002-01-01
The primary visual cortex is the first cortical area of the visual system to receive information from the external visual world. Based on the receptive field characteristics of the neurons in this area, it has been assumed that the primary visual cortex is a purely sensory area extracting basic elements of the visual scene. This information is then processed further upstream in the higher-order visual areas, providing us with perception and storage of the visual environment. However, recent findings show that neural correlates of such higher-order processes, perception and memory, are also observed in the primary visual cortex. These neural correlates are expressed in the modulated activity of the late response of a neuron to a stimulus, and most likely depend on recurrent interactions between several areas of the visual system. This favors the concept of a distributed nature of visual processing in perceptual organization.
Phototaxis and the origin of visual eyes
Randel, Nadine
2016-01-01
Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding based on biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded in a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform the corresponding actions. For that purpose, a simple neural action-planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
Klaver, Peter; Latal, Beatrice; Martin, Ernst
2015-01-01
Infants born prematurely with very low birth weight (VLBW) are at high risk of developing visual perceptual and learning deficits, as well as widespread functional and structural brain abnormalities, during infancy and childhood. Whether and how prematurity alters neural specialization within visual neural networks is still unknown. We used functional and structural brain imaging to examine the visual semantic system of VLBW-born (<1250 g, gestational age 25-32 weeks) adolescents (13-15 years, n = 11, 3 males) and matched term-born control participants (13-15 years, n = 11, 3 males). Neurocognitive assessment revealed no group differences except for lower scores on an adaptive visuomotor integration test. All adolescents were scanned while viewing pictures of animals and tools and scrambled versions of these pictures. Both groups demonstrated animal and tool category related neural networks. Term-born adolescents showed tool category related neural activity, i.e. tool pictures elicited more activity than animal pictures, in temporal and parietal brain areas. Animal category related activity was found in the occipital, temporal and frontal cortex. VLBW-born adolescents showed reduced tool category related activity in the dorsal visual stream compared with controls, specifically in the left anterior intraparietal sulcus, and enhanced animal category related activity in the left middle occipital gyrus and right lingual gyrus. Lower birth weight in VLBW adolescents correlated with greater thickness of the pericalcarine gyrus in the occipital cortex and smaller surface area of the superior temporal gyrus in the lateral temporal cortex. Moreover, greater thickness of the pericalcarine gyrus and smaller surface area of the superior temporal gyrus correlated with reduced tool category related activity in the parietal cortex. Together, our data suggest that very low birth weight predicts alterations of higher-order visual semantic networks, particularly in the dorsal stream.
The differences in neural specialization may be associated with aberrant cortical development of areas in the visual system that develop early in childhood. Copyright © 2014 Elsevier Ltd. All rights reserved.
Visualizing the spinal neuronal dynamics of locomotion
NASA Astrophysics Data System (ADS)
Subramanian, Kalpathi R.; Bashor, D. P.; Miller, M. T.; Foster, J. A.
2004-06-01
Modern imaging and simulation techniques have enhanced system-level understanding of neural function. In this article, we present an application of interactive visualization to understanding the neuronal dynamics that cause locomotion of a single hip joint, based on pattern-generator output of the spinal cord. Our earlier work visualized cell-level responses of multiple neuronal populations; however, the spatial relationships were abstract, making communication with colleagues difficult. We propose two approaches to overcome this: (1) building a 3D anatomical model of the spinal cord with neurons distributed inside, animated by the simulation, and (2) adding limb movements predicted by neuronal activity. The new system was tested using a cat walking central pattern generator driving a pair of opposed spinal motoneuron pools. The output of the opposing motoneuron pools was combined into a single metric, called "Net Neural Drive", which generated angular limb movement in proportion to its magnitude. Net neural drive constitutes a new description of limb movement control. The combination of spatial and temporal information in the visualizations elegantly conveys the neural activity of the output elements (motoneurons), as well as the resulting movement. The new system encompasses five biological levels of organization, from ion channels to observed behavior. The system is easily scalable and provides an efficient interactive platform for rapid hypothesis testing.
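The "Net Neural Drive" concept combines the firing of two opposing motoneuron pools into one signed quantity that moves the joint in proportion to its magnitude. A hedged sketch of that idea (the function names, gain k, and firing rates are illustrative assumptions, not values from the simulation):

```python
def net_neural_drive(flexor_rate: float, extensor_rate: float) -> float:
    """Signed difference of opposing motoneuron pool outputs (spikes/s)."""
    return flexor_rate - extensor_rate

def joint_angle_step(angle: float, drive: float, k: float = 0.01, dt: float = 0.001) -> float:
    """Advance the joint angle in proportion to the net drive (forward Euler)."""
    return angle + k * drive * dt

# Alternating pattern-generator bursts swing the joint back and forth;
# symmetric flexor/extensor bursts return it to the starting angle.
angle = 0.0
for _ in range(500):                      # flexor burst dominates
    angle = joint_angle_step(angle, net_neural_drive(60.0, 10.0))
peak = angle
for _ in range(500):                      # extensor burst dominates
    angle = joint_angle_step(angle, net_neural_drive(10.0, 60.0))
```

The single scalar metric is what makes the visualization readable: one trace per joint, rather than two populations of spike rasters.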
A neural correlate of working memory in the monkey primary visual cortex.
Supèr, H; Spekreijse, H; Lamme, V A
2001-07-06
The brain frequently needs to store information for short periods. In vision, this means that the perceptual correlate of a stimulus has to be maintained temporarily once the stimulus has been removed from the visual scene. However, it is not known how the visual system transfers sensory information into a memory component. Here, we identify a neural correlate of working memory in the monkey primary visual cortex (V1). We propose that this component may link sensory activity with memory activity.
Lightness computation by the human visual system
NASA Astrophysics Data System (ADS)
Rudd, Michael E.
2017-05-01
A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance-matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and attentional windowing of spatial integration. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2), while spatial integration windowing occurs in cortical area V4 or beyond.
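The core edge-integration idea, shared with Retinex-style models, is that lightness is computed by summing weighted log-luminance steps along a path, with different weights (neural gains) for incremental and decremental steps. A minimal sketch under assumed, illustrative weights (this is not Rudd's full model, which adds contrast gain control, edge classification, and attentional windowing):

```python
import math

def lightness_estimate(luminances, w_inc: float = 1.0, w_dec: float = 0.7) -> float:
    """Sum weighted log-luminance steps along a 1D path of patch luminances.

    Incremental steps (darker -> brighter) get weight w_inc; decremental
    steps get w_dec, mimicking different neural gains for the two polarities.
    """
    total = 0.0
    for a, b in zip(luminances, luminances[1:]):
        step = math.log(b / a)
        total += (w_inc if step > 0 else w_dec) * step
    return total
```

With equal weights (w_inc == w_dec == 1) the path sum telescopes to the log ratio of the endpoint luminances, as in the original Retinex integration; unequal weights break that symmetry and make the estimate polarity-dependent.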
Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.
Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James
2016-03-21
Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
Coubard, Olivier A.; Urbanski, Marika; Bourlon, Clémence; Gaumet, Marie
2014-01-01
Vision is a complex function, achieved by movements of the eyes that properly foveate targets at any location in 3D space and continuously refresh neural information in the different visual pathways. The visual system involves five main routes originating in the retinas but varying in their destination within the brain: the occipital cortex, but also the superior colliculus (SC), the pretectum, the suprachiasmatic nucleus, the nucleus of the optic tract, and the dorsal, medial and lateral terminal nuclei. Visual pathway architecture obeys a systematization in the sagittal and transversal planes, so that visual information from the left/right and upper/lower hemi-retinas, corresponding respectively to the right/left and lower/upper visual fields, is processed ipsilaterally and ipsi-altitudinally to the hemi-retinas, in the left/right hemispheres and upper/lower fibers. Organic neurovisual deficits may occur at any level of this circuitry, from the optic nerve to subcortical and cortical destinations, resulting in low- or high-level visual deficits. In this didactic review article, we provide a panorama of the neural bases of eye movements and visual systems, and of related neurovisual deficits. Additionally, we briefly review the different schools of rehabilitation of organic neurovisual deficits, and show that whether the emphasis is put on action or on perception, benefits may be observed at both the motor and perceptual levels. Given the extent of its neural bases in the brain, vision, in its motor and perceptual aspects, is also a useful tool to assess and modulate the central nervous system (CNS) in general. PMID:25538575
Liu, Rong; Zhou, Jiawei; Zhao, Haoxin; Dai, Yun; Zhang, Yudong; Tang, Yong; Zhou, Yifeng
2014-01-01
This study aimed to explore the neural developmental status of the visual system of children (around 8 years old) using contrast sensitivity. We achieved this by eliminating the influence of higher-order aberrations (HOAs) with adaptive optics correction. We measured HOAs, modulation transfer functions (MTFs) and contrast sensitivity functions (CSFs) of six children and five adults with both corrected and uncorrected HOAs. We found that when HOAs were corrected, children and adults both showed improvements in MTF and CSF. However, the CSF of children was still lower than the adult level, indicating that the difference in contrast sensitivity between the groups cannot be explained by differences in optical factors. Further analysis showed that the difference between the groups also could not be explained by differences in non-visual factors. From these results we conclude that the neural systems underlying vision in children around 8 years old are still immature in contrast sensitivity. PMID:24732728
Visual Circuit Development Requires Patterned Activity Mediated by Retinal Acetylcholine Receptors
Burbridge, Timothy J.; Xu, Hong-Ping; Ackman, James B.; Ge, Xinxin; Zhang, Yueyi; Ye, Mei-Jun; Zhou, Z. Jimmy; Xu, Jian; Contractor, Anis; Crair, Michael C.
2014-01-01
The elaboration of nascent synaptic connections into highly ordered neural circuits is an integral feature of the developing vertebrate nervous system. In sensory systems, patterned spontaneous activity before the onset of sensation is thought to influence this process, but this conclusion remains controversial, largely due to the inherent difficulty of recording neural activity in early development. Here, we describe novel genetic and pharmacological manipulations of spontaneous retinal activity, assayed in vivo, that demonstrate a causal link between retinal waves and visual circuit refinement. We also report a decoupling of downstream activity in retinorecipient regions of the developing brain after retinal wave disruption. Significantly, we show that the spatiotemporal characteristics of retinal waves affect the development of specific visual circuits. These results conclusively establish retinal waves as necessary and instructive for circuit refinement in the developing nervous system and reveal how neural circuits adjust to altered patterns of activity prior to experience. PMID:25466916
Neural Pathways Conveying Novisual Information to the Visual Cortex
2013-01-01
The visual cortex has traditionally been considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in congenitally visually deprived individuals, indicating the supramodal nature of functional representation in the visual cortex. To understand the neural substrates of the cross-modal processing of non-visual signals in the visual cortex, we first review evidence for the supramodal nature of the visual cortex. We then review how non-visual signals reach the visual cortex. Moreover, we discuss whether these non-visual pathways are reshaped by early visual deprivation. Finally, we discuss the open question of the nature (stimulus-driven or top-down) of the non-visual signals. PMID:23840972
Immunostaining to visualize murine enteric nervous system development.
Barlow-Anacker, Amanda J; Erickson, Christopher S; Epstein, Miles L; Gosain, Ankush
2015-04-29
The enteric nervous system is formed by neural crest cells that proliferate, migrate and colonize the gut. Following colonization, neural crest cells must then differentiate into neurons with markers specific for their neurotransmitter phenotype. Cholinergic neurons, a major neurotransmitter phenotype in the enteric nervous system, are identified by staining for choline acetyltransferase (ChAT), the synthesizing enzyme for acetylcholine. Historical efforts to visualize cholinergic neurons have been hampered by antibodies with differing specificities to central nervous system versus peripheral nervous system ChAT. We and others have overcome this limitation by using an antibody against placental ChAT, which recognizes both central and peripheral ChAT, to successfully visualize embryonic enteric cholinergic neurons. Additionally, we have compared this antibody to genetic reporters for ChAT and shown that the antibody is more reliable during embryogenesis. This protocol describes a technique for dissecting, fixing and immunostaining of the murine embryonic gastrointestinal tract to visualize enteric nervous system neurotransmitter expression.
Processing Of Visual Information In Primate Brains
NASA Technical Reports Server (NTRS)
Anderson, Charles H.; Van Essen, David C.
1991-01-01
Report reviews and analyzes information-processing strategies and pathways in primate retina and visual cortex. Of interest both in biological fields and in such related computational fields as artificial neural networks. Focuses on data from macaque, which has superb visual system similar to that of humans. Authors stress concept of "good engineering" in understanding visual system.
Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin
2015-03-01
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli have traditionally been encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can first be identified with arbitrary precision and that its spike trains can subsequently be used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
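The building block behind Time Encoding Machines is a neuron that converts an analog stimulus into spike times. A hedged, monochromatic sketch of a single integrate-and-fire TEM channel (the parameter values are illustrative assumptions; the paper's color-mixing, decoding, and identification machinery is not modeled here):

```python
import math

def iaf_time_encode(u, dt, b=1.2, kappa=1.0, delta=0.05):
    """Integrate-and-fire time encoding: accumulate (u(t) + b) / kappa and
    record a spike time whenever the integral reaches the threshold delta,
    then subtract delta (reset). With bias b > max|u|, the spike times
    represent the stimulus faithfully under a Nyquist-type rate condition.
    """
    spike_times, v, t = [], 0.0, 0.0
    for sample in u:
        v += dt * (sample + b) / kappa
        t += dt
        if v >= delta:
            spike_times.append(t)
            v -= delta
    return spike_times

# Encode one second of a slow sinusoid sampled at 1 kHz.
dt = 0.001
u = [0.5 * math.sin(2 * math.pi * 3.0 * k * dt) for k in range(1000)]
spikes = iaf_time_encode(u, dt)
```

Denser spiking marks intervals where the stimulus is high; a time decoding machine inverts the map from inter-spike intervals back to the signal.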
Holistic neural coding of Chinese character forms in bilateral ventral visual system.
Mo, Ce; Yu, Mengxia; Seger, Carol; Mo, Lei
2015-02-01
How are Chinese characters recognized and represented in the brain of skilled readers? Functional MRI fast adaptation technique was used to address this question. We found that neural adaptation effects were limited to identical characters in bilateral ventral visual system while no activation reduction was observed for partially overlapping characters regardless of the spatial location of the shared sub-character components, suggesting highly selective neuronal tuning to whole characters. The consistent neural profile across the entire ventral visual cortex indicates that Chinese characters are represented as mutually distinctive wholes rather than combinations of sub-character components, which presents a salient contrast to the left-lateralized, simple-to-complex neural representations of alphabetic words. Our findings thus revealed the cultural modulation effect on both local neuronal activity patterns and functional anatomical regions associated with written symbol recognition. Moreover, the cross-language discrepancy in written symbol recognition mechanism might stem from the language-specific early-stage learning experience. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
The neural basis of visual dominance in the context of audio-visual object processing.
Schmid, Carmen; Büchel, Christian; Rose, Michael
2011-03-01
Visual dominance refers to the observation that in bimodal environments vision often has an advantage over the other senses in humans. Accordingly, memory performance is assumed to be better for visual than for, e.g., auditory material. However, the reason for this preferential processing and its relation to memory formation are largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or to the auditory modality. The factorial design allowed a direct comparison of the effects between the two modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting: visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opposing domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Visual attention mitigates information loss in small- and large-scale neural codes
Sprague, Thomas C; Saproo, Sameer; Serences, John T
2015-01-01
The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing, or selective attention, is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding of how neuron-level modulations collectively impact stimulus encoding. PMID:25769502
Developmental trajectory of neural specialization for letter and number visual processing.
Park, Joonkoo; van den Berg, Berry; Chiang, Crystal; Woldorff, Marty G; Brannon, Elizabeth M
2018-05-01
Adult neuroimaging studies have demonstrated dissociable neural activation patterns in the visual cortex in response to letters (Latin alphabet) and numbers (Arabic numerals), which suggest a strong experiential influence of reading and mathematics on the human visual system. Here, developmental trajectories in the event-related potential (ERP) patterns evoked by visual processing of letters, numbers, and false fonts were examined in four different age groups (7-, 10-, 15-year-olds, and young adults). The 15-year-olds and adults showed greater neural sensitivity to letters over numbers in the left visual cortex and the reverse pattern in the right visual cortex, extending previous findings in adults to teenagers. In marked contrast, 7- and 10-year-olds did not show this dissociable neural pattern. Furthermore, the contrast of familiar stimuli (letters or numbers) versus unfamiliar ones (false fonts) showed stark ERP differences between the younger (7- and 10-year-olds) and the older (15-year-olds and adults) participants. These results suggest that both coarse (familiar versus unfamiliar) and fine (letters versus numbers) tuning for letters and numbers continue throughout childhood and early adolescence, demonstrating a profound impact of uniquely human cultural inventions on visual cognition and its development. © 2017 John Wiley & Sons Ltd.
Machine Vision Within The Framework Of Collective Neural Assemblies
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1990-03-01
The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions is involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model of a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity and edge enhancement are discussed for a one-dimensional tissue model.
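Each neural population in such a model can be captured by a driven, damped second-order state equation with a static nonlinearity. A hedged single-population sketch (omega, zeta, the sigmoid gain, and the step input are illustrative assumptions; the paper's coupled four-population assemblies are not reproduced):

```python
import math

def sigmoid(x: float, gain: float = 1.0) -> float:
    """Static nonlinearity mapping membrane drive to normalized pulse density."""
    return 1.0 / (1.0 + math.exp(-gain * x))

def population_step(x, v, drive, omega=60.0, zeta=0.7, dt=0.001):
    """One forward-Euler step of x'' + 2*zeta*omega*x' + omega**2*x = omega**2*drive."""
    a = omega**2 * (drive - x) - 2.0 * zeta * omega * v
    return x + dt * v, v + dt * a

# A step input: the population state relaxes toward the driven value.
x, v = 0.0, 0.0
for _ in range(2000):  # 2 s of simulated time
    x, v = population_step(x, v, drive=sigmoid(1.0))
```

Coupling several such units with programmable interconnection gains is what would let a tissue-sheet model reproduce effects like edge enhancement or noise insensitivity.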
Marcar, Valentine L; Baselgia, Silvana; Lüthi-Eisenegger, Barbara; Jäncke, Lutz
2018-03-01
Retinal input processing in the human visual system involves a phasic and a tonic neural response. We investigated the roles of the magno- and parvocellular systems by comparing the influence of the active neural population size and its discharge activity on the amplitude and latency of four VEP components. We recorded the scalp electric potential of 20 human volunteers viewing a series of dartboard images presented as pattern-reversing and pattern on-/offset stimuli. These patterns were designed to vary, in a systematic manner, both the size of the neural population coding the temporal and spatial luminance contrast properties and the discharge activity of the population involved. When the VEP amplitude reflected the size of the neural population coding the temporal luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the parvocellular system. When the VEP amplitude reflected the size of the neural population responding to the spatial luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the magnocellular system. The latencies of the VEP components examined exhibited the same behavior across our stimulus series. This investigation demonstrates the complex interplay of the magno- and parvocellular systems in the neural response as captured by the VEP. It also demonstrates a linear relationship between stimulus property, neural response and the VEP, and reveals the importance of feedback projections in modulating the ongoing neural response. In doing so, it corroborates the conclusions of our previous study.
The ventral visual pathway: an expanded neural framework for the processing of object quality.
Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Ungerleider, Leslie G; Mishkin, Mortimer
2013-01-01
Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy culminating in singular object representations and more parsimoniously incorporates attentional, contextual, and feedback effects. Published by Elsevier Ltd.
Personality dimensions of people who suffer from visual stress.
Hollis, J; Allen, P M; Fleischmann, D; Aulak, R
2007-11-01
Personality dimensions of participants who suffer from visual stress were compared with those of normal participants using the Eysenck Personality Inventory. Extraversion-Introversion scores showed no significant differences between the participants who suffered visual stress and those who were classified as normal. By contrast, significant differences were found between the normal participants and those with visual stress in respect of Neuroticism-Stability. These differences accord with Eysenck's personality theory, which states that those who score highly on the neuroticism scale do so because they have a neurological system with a low threshold that is easily activated by external stimuli. The findings also relate directly to the theory of visual stress proposed by Wilkins, which postulates that visual stress results from an excess of neural activity. The data may indicate that the excess activity is likely to be localised at particular neurological regions or neural processes.
Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J
2003-01-01
eLoom is an open-source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open-source virtual environments development tool, to provide real-time visualizations of the network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom's and Flatland's capabilities.
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Krasowski, Michael J.
1991-01-01
The goal is to develop an approach to automating the alignment and adjustment of optical measurement, visualization, inspection, and control systems. Classical controls, expert systems, and neural networks are three approaches to automating the alignment of an optical system. Neural networks were chosen for this project and the judgements that led to this decision are presented. Neural networks were used to automate the alignment of the ubiquitous laser-beam-smoothing spatial filter. The results and future plans of the project are presented.
Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R
2010-04-01
Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices.
A Neurobehavioral Model of Flexible Spatial Language Behaviors
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224
Hardware Neural Network for a Visual Inspection System
NASA Astrophysics Data System (ADS)
Chun, Seungwoo; Hayakawa, Yoshihiro; Nakajima, Koji
The visual inspection of defects in products is heavily dependent on human experience and instinct. In this situation, it is difficult to reduce production costs and to shorten the inspection time and hence the total process time. Consequently, people involved in this area desire an automatic inspection system. In this paper, we propose a hardware neural network, which is expected to provide high-speed operation for automatic inspection of products. Since neural networks can learn, this is a suitable method for self-adjustment of criteria for classification. To achieve high-speed operation, we use parallel and pipelining techniques. Furthermore, we use a piecewise linear function instead of a conventional activation function in order to save hardware resources. Consequently, our proposed hardware neural network achieved 6 GCPS and 2 GCUPS, which in our test sample proved to be sufficiently fast.
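The saving from a piecewise linear activation can be illustrated with a small sketch (mine, not the authors' circuit design): a "hard sigmoid" built from one multiply, one add and a clamp approximates the conventional logistic function, and those three operations map directly onto cheap, fast digital hardware.

```python
import numpy as np

def sigmoid(x):
    """Conventional activation: the exponential is costly to implement in hardware."""
    return 1.0 / (1.0 + np.exp(-x))

def hard_sigmoid(x):
    """Piecewise-linear approximation: one multiply, one add, one clamp."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

# Compare the two over a wide input range.
xs = np.linspace(-6.0, 6.0, 241)
max_err = float(np.max(np.abs(sigmoid(xs) - hard_sigmoid(xs))))
```

Over the range sampled here the approximation stays within roughly 0.12 of the logistic curve, a deviation often tolerable for classification, where only which side of the decision boundary an output falls on matters.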
Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths.
Stöckl, A L; O'Carroll, D; Warrant, E J
2017-06-28
To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors; and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding is sparse when it comes to central mechanisms. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: nocturnal Deilephila elpenor, crepuscular Manduca sexta, and diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller-eyed Deilephila, but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies. © 2017 The Author(s).
Interactions of Top-Down and Bottom-Up Mechanisms in Human Visual Cortex
McMains, Stephanie; Kastner, Sabine
2011-01-01
Multiple stimuli present in the visual field at the same time compete for neural representation by mutually suppressing their evoked activity throughout visual cortex, providing a neural correlate for the limited processing capacity of the visual system. Competitive interactions among stimuli can be counteracted by top-down, goal-directed mechanisms such as attention, and by bottom-up, stimulus-driven mechanisms. Because these two processes cooperate in everyday life to bias processing toward behaviorally relevant or particularly salient stimuli, it has proven difficult to study interactions between top-down and bottom-up mechanisms. Here, we used an experimental paradigm in which we first isolated the effects of a bottom-up influence on neural competition by parametrically varying the degree of perceptual grouping in displays that were not attended. Second, we probed the effects of directed attention on the competitive interactions induced with the parametric design. We found that the amount of attentional modulation varied linearly with the degree of competition left unresolved by bottom-up processes, such that attentional modulation was greatest when neural competition was little influenced by bottom-up mechanisms and smallest when competition was strongly influenced by bottom-up mechanisms. These findings suggest that the strength of attentional modulation in the visual system is constrained by the degree to which competitive interactions have been resolved by bottom-up processes related to the segmentation of scenes into candidate objects. PMID:21228167
A massively asynchronous, parallel brain.
Zeki, Semir
2015-05-19
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously--with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.
The ventral visual pathway: An expanded neural framework for the processing of object quality
Kravitz, Dwight J.; Saleem, Kadharbatcha S.; Baker, Chris I.; Ungerleider, Leslie G.; Mishkin, Mortimer
2012-01-01
Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy that culminates in singular object representations for utilization mainly by ventrolateral prefrontal cortex and, more parsimoniously than this account, incorporates attentional, contextual, and feedback effects. PMID:23265839
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
People can understand descriptions of motion without activating visual motion brain regions
Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina
2013-01-01
What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592
Neural attractor network for application in visual field data classification.
Fink, Wolfgang
2004-07-07
The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination, which may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, for example and in particular, with respect to non-ophthalmic clinics or in communities where perimetric expertise is not readily available.
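A minimal Hopfield-style sketch makes the three listed advantages concrete (the two defect templates and the 16-location field below are invented for illustration, far smaller than a real perimetric grid): the couplings follow directly from the stored patterns via a Hebbian rule, so no training phase is needed, and the overlap and Hamming distance to the winning template supply the confidence measure.

```python
import numpy as np

# Hypothetical idealized defect templates (+1 = defect, -1 = intact) over
# 16 test locations, standing in for the paper's scotoma patterns.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
], dtype=float)
n = patterns.shape[1]

# Hebbian couplings: determined by the stored patterns, no training required.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def relax(state, steps=10):
    """Iterative relaxation: update neuron states until they settle into an attractor."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

def classify(field):
    """Return best-matching template index plus confidence measures:
    the overlap (normalized dot product) and Hamming distance to it."""
    s = relax(field)
    overlaps = patterns @ s / n
    k = int(np.argmax(overlaps))
    hamming = int(np.sum(patterns[k] != s))
    return k, float(overlaps[k]), hamming

# A 'noisy' field: template 0 with three flipped locations (early-stage disease).
noisy = patterns[0].copy()
noisy[[0, 5, 9]] *= -1
label, overlap, dist = classify(noisy)
```

Relaxation pulls the corrupted field back onto the stored attractor, so the noisy input is still assigned to template 0 with maximal overlap.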
The neural basis of visual behaviors in the larval zebrafish
Portugues, Ruben; Engert, Florian
2015-01-01
We review visually guided behaviors in larval zebrafish and summarise what is known about the neural processing that results in these behaviors, paying particular attention to the progress made in the last 2 years. Using the examples of the optokinetic reflex, the optomotor response, prey tracking and the visual startle response, we illustrate how the larval zebrafish presents us with a very promising model vertebrate system that allows neuroscientists to integrate functional and behavioral studies and from which we can expect illuminating insights in the near future. PMID:19896836
Suzuki, Daichi G; Murakami, Yasunori; Escriva, Hector; Wada, Hiroshi
2015-02-01
Vertebrates are equipped with so-called camera eyes, which provide them with image-forming vision. Vertebrate image-forming vision evolved independently from that of other animals and is regarded as a key innovation for enhancing predatory ability and ecological success. Evolutionary changes in the neural circuits, particularly the visual center, were central for the acquisition of image-forming vision. However, the evolutionary steps, from protochordates to jawless primitive vertebrates and then to jawed vertebrates, remain largely unknown. To bridge this gap, we present the detailed development of retinofugal projections in the lamprey, the neuroarchitecture in amphioxus, and the brain patterning in both animals. Both the lateral eye in larval lamprey and the frontal eye in amphioxus project to a light-detecting visual center in the caudal prosencephalic region marked by Pax6, which possibly represents the ancestral state of the chordate visual system. Our results indicate that the visual system of the larval lamprey represents an evolutionarily primitive state, forming a link from protochordates to vertebrates and providing a new perspective of brain evolution based on developmental mechanisms and neural functions. © 2014 Wiley Periodicals, Inc.
A Conserved Developmental Mechanism Builds Complex Visual Systems in Insects and Vertebrates
Joly, Jean-Stéphane; Recher, Gaelle; Brombin, Alessandro; Ngo, Kathy; Hartenstein, Volker
2016-01-01
The visual systems of vertebrates and many other bilaterian clades consist of complex neural structures guiding a wide spectrum of behaviors. Homologies at the level of cell types and even discrete neural circuits have been proposed, but many questions of how the architecture of visual neuropils evolved among different phyla remain open. In this review we argue that the profound conservation of genetic and developmental steps generating the eye and its target neuropils in fish and fruit flies supports a homology between some core elements of bilaterian visual circuitries. Fish retina and tectum, and fly optic lobe, develop from a partitioned, unidirectionally proliferating neurectodermal domain that combines slowly dividing neuroepithelial stem cells and rapidly amplifying progenitors with shared genetic signatures to generate large numbers and different types of neurons in a temporally ordered way. This peculiar ‘conveyor belt neurogenesis’ could play an essential role in generating the topographically ordered circuitry of the visual system. PMID:27780043
The what, where and how of auditory-object perception.
Bizley, Jennifer K; Cohen, Yale E
2013-10-01
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
Visualization of suspicious lesions in breast MRI based on intelligent neural systems
NASA Astrophysics Data System (ADS)
Twellmann, Thorsten; Lange, Oliver; Nattkemper, Tim Wilhelm; Meyer-Bäse, Anke
2006-05-01
Intelligent medical systems based on supervised and unsupervised artificial neural networks are applied to the automatic visualization and classification of suspicious lesions in breast MRI. These systems represent an important component of future sophisticated computer-aided diagnosis systems and enable the extraction of spatial and temporal features of dynamic MRI data stemming from patients with confirmed lesion diagnosis. By taking into account the heterogeneity of the cancerous tissue, these techniques reveal the malignant, benign and normal kinetic signals and provide a regional subclassification of pathological breast tissue. Intelligent medical systems are expected to have substantial implications in healthcare politics by contributing to the diagnosis of indeterminate breast lesions by non-invasive imaging.
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452
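As a heavily reduced illustration of the predict/correct cycle such a model implements (a 1-D linear Kalman filter rather than the paper's recurrent-network dual EKF; all numbers are invented), a motor prediction shifts the remembered gaze-centered position opposite to each eye displacement, and any visual re-observation is fused in proportion to the relative uncertainties:

```python
# State: remembered target position in eye-centered coordinates.
# During smooth pursuit the eye displaces by `d_eye` each step, so the
# remembered position must shift by -d_eye to remain accurate ("updating").

def predict(x, P, d_eye, q=0.01):
    """Motor prediction: shift memory opposite to the eye, inflate uncertainty."""
    return x - d_eye, P + q

def correct(x, P, z, r=0.25):
    """Visual correction: fuse a noisy retinal re-observation of the target."""
    K = P / (P + r)                    # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Target flashed at +10 deg; the eye then pursues rightward at 1 deg/step.
x, P = 10.0, 0.5
true_pos = 10.0
for step in range(5):
    true_pos -= 1.0                    # eye moved 1 deg right -> target 1 deg left on retina
    x, P = predict(x, P, d_eye=1.0)
remembered = x                         # tracks true_pos = 5.0 in this noise-free run
```

In this noise-free run the motor prediction alone keeps the memory aligned with the true retinal position, while `correct` shows how a re-observation would pull the estimate toward the measurement and shrink its variance.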
Beyond perceptual expertise: revisiting the neural substrates of expert object recognition
Harel, Assaf; Kravitz, Dwight; Baker, Chris I.
2013-01-01
Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expertise-related activity is: (i) found throughout visual cortex, not just FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer, suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex. PMID:24409134
Neural mechanisms of limb position estimation in the primate brain.
Shi, Ying; Buneo, Christopher A
2011-01-01
Understanding the neural mechanisms of limb position estimation is important both for comprehending the neural control of goal-directed arm movements and for developing neuroprosthetic systems designed to replace lost limb function. Here we examined the role of area 5 of the posterior parietal cortex in estimating limb position based on visual and somatic (proprioceptive, efference copy) signals. Single unit recordings were obtained as monkeys reached to visual targets presented in a semi-immersive virtual reality environment. On half of the trials animals were required to maintain their limb position at these targets while receiving both visual and non-visual feedback of their arm position, while on the other trials visual feedback was withheld. When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace but very few neurons modulated their firing rates based on the presence/absence of visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level.
Tabei, Ken-ichi; Satoh, Masayuki; Kida, Hirotaka; Kizaki, Moeni; Sakuma, Haruno; Sakuma, Hajime; Tomimoto, Hidekazu
2015-01-01
Research on the neural processing of optical illusions can provide clues for understanding the neural mechanisms underlying visual perception. Previous studies have shown that some visual areas contribute to the perception of optical illusions such as the Kanizsa triangle and Müller-Lyer figure; however, the neural mechanisms underlying the processing of these and other optical illusions have not been clearly identified. Using functional magnetic resonance imaging (fMRI), we determined which brain regions are active during the perception of optical illusions. For our study, we enrolled 18 participants. The illusory optical stimuli consisted of many kana letters, which are Japanese phonograms. During the shape task, participants stated aloud whether they perceived the shapes of two optical illusions as being the same or not. During the word task, participants read aloud the kana letters in the stimuli. A direct comparison between the shape and word tasks showed activation of the right inferior frontal gyrus, left medial frontal gyrus, and right pulvinar. It is well known that there are two visual pathways, the geniculate and extrageniculate systems, which belong to the higher-level and primary visual systems, respectively. The pulvinar belongs to the latter system, and the findings of the present study suggest that the extrageniculate system is involved in the cognitive processing of optical illusions. PMID:26083375
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
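The comparison pipeline described here can be sketched in a few lines (synthetic data stand in for the fMRI patterns; the 6-stimulus, 50-voxel sizes are arbitrary): build a representational dissimilarity matrix (RDM) of pairwise pattern distances for a searchlight sphere, build one per candidate model, and score each model by the rank correlation between the RDMs' upper triangles.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for every pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Off-diagonal upper triangle: the part actually compared across RDMs."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(a, b):
    """Rank correlation, a standard RDM comparison statistic."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(0)
features = rng.normal(size=(6, 8))                  # 6 stimuli, 8 latent features
voxels = features @ rng.normal(size=(8, 50))        # hypothetical searchlight responses
model_a = rdm(features)                             # model matched to the data
model_b = rdm(rng.normal(size=(6, 8)))              # unrelated control model
fit_a = spearman(upper(rdm(voxels)), upper(model_a))
fit_b = spearman(upper(rdm(voxels)), upper(model_b))
```

Because the simulated voxels are a linear mixture of the first model's features, `fit_a` comes out high while the control model's fit hovers near chance, which is the contrast the searchlight procedure maps across cortex.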
Aging and the interaction of sensory cortical function and structure.
Peiffer, Ann M; Hugenschmidt, Christina E; Maldjian, Joseph A; Casanova, Ramon; Srikanth, Ryali; Hayasaka, Satoru; Burdette, Jonathan H; Kraft, Robert A; Laurienti, Paul J
2009-01-01
Even the healthiest older adults experience changes in cognitive and sensory function. Studies show that older adults have reduced neural responses to sensory information. However, it is well known that sensory systems do not act in isolation but function cooperatively to either enhance or suppress neural responses to individual environmental stimuli. Very little research has been dedicated to understanding how aging affects the interactions between sensory systems, especially cross-modal deactivations or the ability of one sensory system (e.g., audition) to suppress the neural responses in another sensory system cortex (e.g., vision). Such cross-modal interactions have been implicated in attentional shifts between sensory modalities and could account for increased distractibility in older adults. To assess age-related changes in cross-modal deactivations, functional MRI studies were performed in 61 adults between 18 and 80 years old during simple auditory and visual discrimination tasks. Results within visual cortex confirmed previous findings of decreased responses to visual stimuli for older adults. Age-related changes in the visual cortical response to auditory stimuli were, however, much more complex and suggested an alteration with age in the functional interactions between the senses. Ventral visual cortical regions exhibited cross-modal deactivations in younger but not older adults, whereas more dorsal aspects of visual cortex were suppressed in older but not younger adults. These differences in deactivation also remained after adjusting for age-related reductions in brain volume of sensory cortex. Thus, functional differences in cortical activity between older and younger adults cannot solely be accounted for by differences in gray matter volume. (c) 2007 Wiley-Liss, Inc.
Surfing a spike wave down the ventral stream.
VanRullen, Rufin; Thorpe, Simon J
2002-10-01
Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions and (iii) top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
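The claim that the most salient information rides on the earliest spikes can be illustrated with a toy rank-order code (my construction in the spirit of this framework, not the authors' simulation code): each unit fires a single spike, more salient units fire earlier, and a decoder that weights spikes by a geometrically shrinking factor recovers the most salient structure from just the first few spikes.

```python
import numpy as np

def encode(saliency):
    """Return unit indices in firing order: the most salient unit fires first."""
    return np.argsort(-saliency)

def decode(order, n_first, shape, mod=0.9):
    """Reconstruct from only the first `n_first` spikes, weighting each
    successive spike by a shrinking modulation factor (rank-order readout)."""
    est = np.zeros(order.size)
    for rank, unit in enumerate(order[:n_first]):
        est[unit] = mod ** rank
    return est.reshape(shape)

# A tiny 'retinal' saliency map (values invented for illustration).
img = np.array([[0.9, 0.1, 0.8],
                [0.2, 1.0, 0.3],
                [0.7, 0.0, 0.6]])
order = encode(img.ravel())
partial = decode(order, n_first=3, shape=img.shape)
```

With only the first 3 of 9 spikes, the three most salient pixels are already placed in the reconstruction; later spikes merely refine it, which is why such a code supports very rapid recognition.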
Wu, Jinglong; Chen, Kewei; Imajyo, Satoshi; Ohno, Seiichiro; Kanazawa, Susumu
2013-01-01
In human visual cortex, the primary visual cortex (V1) is considered essential for visual information processing, while the fusiform face area (FFA) and parahippocampal place area (PPA) are considered face-selective and place-selective regions, respectively. Recently, a functional magnetic resonance imaging (fMRI) study showed that the ratio of neural activity between V1 and FFA remained constant as eccentricity increased in the central visual field. However, in the wide visual field, the relationships between neural activity in V1 and FFA, or V1 and PPA, remain unclear. In this work, using fMRI and a wide-view presentation system, we addressed this issue by measuring neural activity in V1, FFA and PPA for images of faces and houses presented at 4 eccentricities and along 4 meridians. We then calculated the ratio relative to V1 (RRV1) by comparing the neural response amplitudes in FFA or PPA with those in V1. We found that V1, FFA, and PPA showed significantly different neural activity across the three dimensions of eccentricity, meridian, and region. Most importantly, the RRV1s in FFA and PPA also exhibited significant differences in these three dimensions. In the eccentricity dimension, both FFA and PPA showed smaller RRV1s at the central position than at peripheral positions. In the meridian dimension, both FFA and PPA showed larger RRV1s at upper vertical positions than at lower vertical positions. In the region dimension, FFA had larger RRV1s than PPA. We propose that these differential RRV1s indicate that FFA and PPA may use different processing strategies for encoding wide-field visual information from V1. These strategies might depend on the retinal position at which faces or houses are typically observed in daily life. We posit a role for experience in shaping information-processing strategies in the ventral visual cortex. PMID:23991147
A massively asynchronous, parallel brain
Zeki, Semir
2015-01-01
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously—with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promises to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871
Superior visual performance in nocturnal insects: neural principles and bio-inspired technologies
NASA Astrophysics Data System (ADS)
Warrant, Eric J.
2016-04-01
At night, our visual capacities are severely reduced, with a complete loss in our ability to see colour and a dramatic loss in our ability to see fine spatial and temporal details. This is not the case for many nocturnal animals, notably insects. Our recent work, particularly on fast-flying moths and bees and on ball-rolling dung beetles, has shown that nocturnal animals are able to distinguish colours, to detect faint movements, to learn visual landmarks, to orient to the faint pattern of polarised light produced by the moon and to navigate using the stars. These impressive visual abilities are the result of exquisitely adapted eyes and visual systems, the product of millions of years of evolution. Nocturnal animals typically have highly sensitive eye designs and visual neural circuitry that is optimised for extracting reliable information from dim and noisy visual images. Even though we are only at the threshold of understanding the neural mechanisms responsible for reliable nocturnal vision, growing evidence suggests that the neural summation of photons in space and time is critically important: even though vision in dim light becomes necessarily coarser and slower, it also becomes significantly more reliable. We explored the benefits of spatiotemporal summation by creating a computer algorithm that mimicked nocturnal visual processing strategies. This algorithm dramatically increased the reliability of video collected in dim light, including the preservation of colour, strengthening evidence that summation strategies are essential for nocturnal vision.
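The spatiotemporal summation strategy described above can be caricatured as a simple box average over a noisy image sequence (a deliberately simplified sketch; the published algorithm adapts its summation to local image structure, which this toy version does not):

```python
import numpy as np

def spatiotemporal_sum(video, spatial=3, temporal=3):
    """Average each pixel over a spatial x spatial window and `temporal`
    consecutive frames: coarser and slower, but far less noisy.
    video: array of shape (frames, height, width)."""
    t_pad, s_pad = temporal // 2, spatial // 2
    padded = np.pad(video, ((t_pad, t_pad), (s_pad, s_pad), (s_pad, s_pad)),
                    mode="edge")
    out = np.zeros(video.shape, dtype=float)
    for dt in range(temporal):
        for dy in range(spatial):
            for dx in range(spatial):
                out += padded[dt:dt + video.shape[0],
                              dy:dy + video.shape[1],
                              dx:dx + video.shape[2]]
    return out / (temporal * spatial * spatial)

# a dim, noisy but constant scene: summation shrinks the noise
rng = np.random.default_rng(0)
video = 0.5 + 0.2 * rng.standard_normal((10, 32, 32))
denoised = spatiotemporal_sum(video)
```

Averaging N roughly independent samples reduces the noise standard deviation by about the square root of N, which is why pooling in both space and time pays off so quickly in dim light.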
Distinct neural substrates for visual short-term memory of actions.
Cai, Ying; Urgolites, Zhisen; Wood, Justin; Chen, Chuansheng; Li, Siyao; Chen, Antao; Xue, Gui
2018-06-26
Fundamental theories of human cognition have long posited that the short-term maintenance of actions is supported by one of the "core knowledge" systems of human visual cognition, yet its neural substrates are still not well understood. In particular, it is unclear whether visual short-term memory (VSTM) of actions has distinct neural substrates or, as proposed by the spatio-object architecture of VSTM, shares them with VSTM of objects and spatial locations. In two experiments, we tested these two competing hypotheses by directly contrasting the neural substrates for VSTM of actions with those for objects and locations. Our results showed that the bilateral middle temporal cortex (MT) was specifically involved in VSTM of actions: its activation and its functional connectivity with the frontoparietal network (FPN) were modulated only by the memory load of actions, not by that of objects/agents or locations. Moreover, the brain region involved in the maintenance of spatial location information (i.e., the superior parietal lobule, SPL) was also recruited during the maintenance of actions, consistent with the temporal-spatial nature of actions. Meanwhile, the FPN was commonly involved in all types of VSTM and showed flexible functional connectivity with the domain-specific regions, depending on the current working memory task. Together, our results provide clear evidence for a distinct neural system for maintaining actions in VSTM, which supports the core knowledge system theory and the domain-specific and domain-general architectures of VSTM. © 2018 Wiley Periodicals, Inc.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
The neural basis of visual behaviors in the larval zebrafish.
Portugues, Ruben; Engert, Florian
2009-12-01
We review visually guided behaviors in larval zebrafish and summarise what is known about the neural processing that results in these behaviors, paying particular attention to progress made in the last 2 years. Using the examples of the optokinetic reflex, the optomotor response, prey tracking and the visual startle response, we illustrate how the larval zebrafish presents us with a very promising model vertebrate system that allows neuroscientists to integrate functional and behavioral studies, and from which we can expect illuminating insights in the near future. Copyright 2009 Elsevier Ltd. All rights reserved.
Lee, Ryan H; Mills, Elizabeth A; Schwartz, Neil; Bell, Mark R; Deeg, Katherine E; Ruthazer, Edward S; Marsh-Armstrong, Nicholas; Aizenman, Carlos D
2010-01-12
Imbalances in the regulation of pro-inflammatory cytokines have been increasingly correlated with a number of severe and prevalent neurodevelopmental disorders, including autism spectrum disorder, schizophrenia and Down syndrome. Although several studies have shown that cytokines have potent effects on neural function, their role in neural development is still poorly understood. In this study, we investigated the link between abnormal cytokine levels and neural development using the Xenopus laevis tadpole visual system, a model frequently used to examine the anatomical and functional development of neural circuits. Using a test for a visually guided behavior that requires normal visual system development, we examined the long-term effects of prolonged developmental exposure to three pro-inflammatory cytokines with known neural functions: interleukin (IL)-1beta, IL-6 and tumor necrosis factor (TNF)-alpha. We found that all cytokines affected the development of normal visually guided behavior. Neuroanatomical imaging of the visual projection showed that none of the cytokines caused any gross abnormalities in the anatomical organization of this projection, suggesting that they may be acting at the level of neuronal microcircuits. We further tested the effects of TNF-alpha on the electrophysiological properties of the retinotectal circuit and found that long-term developmental exposure to TNF-alpha resulted in enhanced spontaneous excitatory synaptic transmission in tectal neurons, increased AMPA/NMDA ratios of retinotectal synapses, and a decrease in the number of immature synapses containing only NMDA receptors, consistent with premature maturation and stabilization of these synapses. Local interconnectivity within the tectum also appeared to remain widespread, as shown by increased recurrent polysynaptic activity, and was similar to what is seen in more immature, less refined tectal circuits. 
TNF-alpha treatment also enhanced the overall growth of tectal cell dendrites. Finally, we found that TNF-alpha-reared tadpoles had increased susceptibility to pentylenetetrazol-induced seizures. Taken together our data are consistent with a model in which TNF-alpha causes premature stabilization of developing synapses within the tectum, therefore preventing normal refinement and synapse elimination that occurs during development, leading to increased local connectivity and epilepsy. This experimental model also provides an integrative approach to understanding the effects of cytokines on the development of neural circuits and may provide novel insights into the etiology underlying some neurodevelopmental disorders.
Prefrontal contributions to visual selective attention.
Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin
2013-07-08
The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.
Proulx, Michael J.; Gwinnutt, James; Dell’Erba, Sara; Levy-Tzedek, Shelly; de Sousa, Alexandra A.; Brown, David J.
2015-01-01
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other, intact senses to replace missing visual information through sensory substitution. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so that the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control of action in the blind or blindfolded. It appears that the use of sensory substitution devices engages the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. We review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action. PMID:26599473
ERIC Educational Resources Information Center
Mavritsaki, Eirini; Heinke, Dietmar; Allen, Harriet; Deco, Gustavo; Humphreys, Glyn W.
2011-01-01
We present the case for a role of biologically plausible neural network modeling in bridging the gap between physiology and behavior. We argue that spiking-level networks can allow "vertical" translation between physiological properties of neural systems and emergent "whole-system" performance--enabling psychological results to be simulated from…
Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.
Ibbotson, M R
2017-01-23
The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.
Amsel, Ben D
2011-04-01
Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the time course and intensity of the influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single-trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200 ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex is a crucial element of the constellation of processes constituting word-meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
A unified dynamic neural field model of goal directed eye movements
NASA Astrophysics Data System (ADS)
Quinton, J. C.; Goffart, L.
2018-01-01
Primates rely heavily on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. Interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, rather than contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target at low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, and (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
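As a rough sketch of the dynamic neural field machinery such models build on (a minimal Amari-style field with hypothetical parameters, not the authors' full oculomotor model), a localized input drives a self-stabilizing peak of activity:

```python
import numpy as np

def simulate_dnf(stimulus, steps=200, dt=0.1, tau=1.0, h=-2.0):
    """1-D Amari field: tau * du/dt = -u + h + (w * f(u)) + stimulus,
    with local excitation and surround inhibition in the kernel w."""
    n = stimulus.size
    x = np.arange(n)
    dist = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(dist, n - dist)                # circular distance
    w = 2.0 * np.exp(-dist**2 / (2 * 3.0**2)) - 0.5  # Mexican-hat kernel
    u = np.full(n, h)
    for _ in range(steps):
        f = (u > 0).astype(float)                    # Heaviside firing rate
        u += dt / tau * (-u + h + (w @ f) / n * 10 + stimulus)
    return u

stimulus = np.zeros(100)
stimulus[40:45] = 5.0                                # localized visual input
u = simulate_dnf(stimulus)
# a peak of activity forms over the stimulated region and nowhere else
```

In the abstract's terms, the peak location is the field's estimate of the current target position; the model's internal projections then refine and shift this estimate to drive saccades or pursuit.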
NASA Astrophysics Data System (ADS)
Hatfield, Fraser N.; Dehmeshki, Jamshid
1998-09-01
Neurosurgery is an extremely specialized area of medical practice, requiring many years of training. It has been suggested that virtual reality models of the complex structures within the brain may aid in the training of neurosurgeons as well as playing an important role in the preparation for surgery. This paper focuses on the application of a probabilistic neural network to the automatic segmentation of the ventricles from magnetic resonance images of the brain, and their three dimensional visualization.
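A probabilistic neural network is essentially a Parzen-window Bayesian classifier with one Gaussian kernel per training sample; a minimal sketch on made-up voxel features follows (the feature choice and values are hypothetical, not taken from the paper):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """PNN decision: class score = mean Gaussian-kernel activation over
    that class's training samples; return the highest-scoring class."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for label in np.unique(train_y):
        members = train_X[train_y == label]
        sq_dist = np.sum((members - x) ** 2, axis=1)
        scores[label] = float(np.mean(np.exp(-sq_dist / (2 * sigma**2))))
    return max(scores, key=scores.get)

# toy voxel features: [normalized intensity, normalized spatial feature]
train_X = np.array([[0.10, 0.20], [0.15, 0.25], [0.80, 0.90], [0.85, 0.80]])
train_y = np.array([0, 0, 1, 1])   # 0 = ventricle (dark CSF), 1 = other tissue
label = pnn_classify([0.12, 0.22], train_X, train_y)   # -> 0 (ventricle)
```

Training is just storing the labeled samples, and the single smoothing parameter sigma is the main thing to tune, which helps explain why PNNs were attractive for segmentation tasks of this era.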
Davis, Zachary W.; Chapman, Barbara
2015-01-01
Visually evoked activity is necessary for the normal development of the visual system. However, little is known about the capacity of patterned spontaneous activity to drive the maturation of receptive fields before visual experience. Retinal waves provide instructive retinotopic information for the anatomical organization of the visual thalamus. To determine whether retinal waves also drive the maturation of functional responses, we pharmacologically increased the frequency of retinal waves in the ferret (Mustela putorius furo) during a period of retinogeniculate development before eye opening. The development of geniculate receptive fields following this increased activity was measured using single-unit electrophysiology. We found that increased retinal waves accelerate the developmental reduction of geniculate receptive field sizes. This reduction is due to a decrease in receptive field center size rather than an increase in inhibitory surround strength. This work reveals an instructive role for patterned spontaneous activity in guiding the functional development of neural circuits. SIGNIFICANCE STATEMENT Patterned spontaneous neural activity that occurs during development is known to be necessary for the proper formation of neural circuits. However, it is unknown whether spontaneous activity alone is sufficient to drive the maturation of the functional properties of neurons. Our work demonstrates for the first time an acceleration in the maturation of neural function as a consequence of driving patterned spontaneous activity during development. This work has implications for our understanding of how neural circuits can be actively modified to improve function prematurely or to recover from injury with guided interventions of patterned neural activity. PMID:26511250
Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.
2013-01-01
Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. FMRI blood-oxygenation-level-dependent (BOLD) signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions targeted at treating cannabis dependence. PMID:24035535
How Deep Neural Networks Can Improve Emotion Recognition on Video Data
2016-09-25
Khorrami, Pooya; Paine, Tom Le; Brady, Kevin; Dagli, Charlie; Thomas S...
In this work, we present a system that performs emotion recognition on video data using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects…
A Multimodal Neural Network Recruited by Expertise with Musical Notation
ERIC Educational Resources Information Center
Wong, Yetta Kwailing; Gauthier, Isabel
2010-01-01
Prior neuroimaging work on visual perceptual expertise has focused on changes in the visual system, ignoring possible effects of acquiring expert visual skills in nonvisual areas. We investigated expertise for reading musical notation, a skill likely to be associated with multimodal abilities. We compared brain activity in music-reading experts…
A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology
ERIC Educational Resources Information Center
Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren
2005-01-01
A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…
Hu, Bin; Yue, Shigang; Zhang, Zhuhong
All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric, laterally inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have been carried out and have validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
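The multiplicative pairing at the heart of the postsynaptic network, delayed excitation from one direction-selective unit multiplied by current excitation from its cyclic neighbour, can be caricatured with a ring of four direction detectors (a toy sketch, not the published DSNN):

```python
import numpy as np

def rotation_response(direction_sequence, n_dirs=4):
    """Units 0..n_dirs-1 sit on a ring of preferred directions.
    CW rotation excites them in ascending cyclic order, so the CW
    detector multiplies each unit's delayed activity with its CW
    neighbour's current activity (and symmetrically for CCW)."""
    cw = ccw = 0.0
    prev = np.zeros(n_dirs)
    for d in direction_sequence:
        cur = np.zeros(n_dirs)
        cur[d] = 1.0
        cw += float(np.sum(prev * np.roll(cur, -1)))   # prev unit k, cur unit k+1
        ccw += float(np.sum(prev * np.roll(cur, 1)))   # prev unit k, cur unit k-1
        prev = cur
    return cw, ccw

# local directional cues sweeping through the ring in ascending order
cw, ccw = rotation_response([0, 1, 2, 3, 0, 1, 2, 3])
# the clockwise channel accumulates evidence; the ccw channel stays silent
```

The multiplication acts as a coincidence detector between a delayed signal and its cyclic neighbour, so only consistent rotation in one direction builds up a response.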
Advances in color science: from retina to behavior
Chatterjee, Soumya; Field, Greg D.; Horwitz, Gregory D.; Johnson, Elizabeth N.; Koida, Kowa; Mancuso, Katherine
2010-01-01
Color has become a premier model system for understanding how information is processed by neural circuits, and for investigating the relationships among genes, neural circuits and perception. Both the physical stimulus for color and the perceptual output experienced as color are quite well characterized, but the neural mechanisms that underlie the transformation from stimulus to perception are incompletely understood. The past several years have seen important scientific and technical advances that are changing our understanding of these mechanisms. Here, and in the accompanying minisymposium, we review the latest findings and hypotheses regarding color computations in the retina, primary visual cortex and higher-order visual areas, focusing on non-human primates, a model of human color vision. PMID:21068298
Multisensory guidance of orienting behavior.
Maier, Joost X; Groh, Jennifer M
2009-12-01
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We review these differences at the neural level and discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
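Restricted to the horizontal plane, the head- to eye-centered transformation the review discusses reduces to subtracting the current eye-in-head position from the head-centered sound direction. The function below is a toy illustration of that geometry only; the function name and degree convention are our own.

```python
def head_to_eye_centered(sound_azimuth_deg, eye_azimuth_deg):
    """Map a head-centered sound azimuth into eye-centered coordinates by
    subtracting the current eye-in-head position (horizontal plane only).
    Positive angles are rightward of the midline."""
    return sound_azimuth_deg - eye_azimuth_deg
```

For example, a sound 20 degrees right of the head midline, viewed while the eyes are deviated 5 degrees right, lands 15 degrees right of the fovea; the neural implementation of this subtraction (rate code vs. place code) is exactly what the reviewed studies probe.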
Intrusive Images in Psychological Disorders
Brewin, Chris R.; Gregory, James D.; Lipton, Michelle; Burgess, Neil
2010-01-01
Involuntary images and visual memories are prominent in many types of psychopathology. Patients with posttraumatic stress disorder, other anxiety disorders, depression, eating disorders, and psychosis frequently report repeated visual intrusions corresponding to a small number of real or imaginary events, usually extremely vivid, detailed, and with highly distressing content. Both memory and imagery appear to rely on common networks involving medial prefrontal regions, posterior regions in the medial and lateral parietal cortices, the lateral temporal cortex, and the medial temporal lobe. Evidence from cognitive psychology and neuroscience implies distinct neural bases to abstract, flexible, contextualized representations (C-reps) and to inflexible, sensory-bound representations (S-reps). We revise our previous dual representation theory of posttraumatic stress disorder to place it within a neural systems model of healthy memory and imagery. The revised model is used to explain how the different types of distressing visual intrusions associated with clinical disorders arise, in terms of the need for correct interaction between the neural systems supporting S-reps and C-reps via visuospatial working memory. Finally, we discuss the treatment implications of the new model and relate it to existing forms of psychological therapy. PMID:20063969
Endogenous modulation of human visual cortex activity improves perception at twilight.
Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A
2018-04-10
Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro
2012-01-01
Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547
A System for Video Surveillance and Monitoring CMU VSAM Final Report
1999-11-30
motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses... rithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single
The Role of Lamination in Neocortical Function
1991-12-20
U. Studies of the Tectofugal System: Tectal pathways to the telencephalon in birds and mammals. The tecto-thalamo-telencephalic visual pathway is... significance of lamination of the telencephalon. Visual Structures and Integrated Functions, Research Notes in Neural Computing (Michael Arbib and Jörg
Pattern recognition neural-net by spatial mapping of biology visual field
NASA Astrophysics Data System (ADS)
Lin, Xin; Mori, Masahiko
2000-05-01
The method of spatial mapping found in biological visual fields is applied to artificial neural networks for pattern recognition. By a coordinate transform known as complex-logarithm mapping, followed by a Fourier transform, input images are converted into scale-, rotation-, and shift-invariant patterns and then fed into a multilayer neural network for learning and recognition. Results from computer simulation and an optical experimental system are described.
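The invariance scheme described here is the classic Fourier-Mellin idea: the Fourier magnitude discards shifts, log-polar (complex-log) resampling turns scale and rotation into translations, and a second magnitude spectrum discards those too. The numpy sketch below illustrates the pipeline under our own choices of sampling grid and sizes; it is not the paper's optical implementation.

```python
import numpy as np

def logpolar_invariant(img, n_r=64, n_theta=64):
    """Shift-, scale-, and rotation-insensitive descriptor of a 2D image:
    |FFT| removes shift; log-polar resampling maps scale/rotation to
    translations; a second |FFT| removes those translations."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spec.shape[0] / 2.0, spec.shape[1] / 2.0
    # log-spaced radii implement the complex-logarithm mapping
    radii = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))
    angles = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = np.clip((cy + radii[:, None] * np.sin(angles)).astype(int),
                 0, spec.shape[0] - 1)
    xs = np.clip((cx + radii[:, None] * np.cos(angles)).astype(int),
                 0, spec.shape[1] - 1)
    logpolar = spec[ys, xs]               # n_r x n_theta sampling of the spectrum
    return np.abs(np.fft.fft2(logpolar))  # invariant descriptor
```

Such a descriptor (rather than the raw image) would then be the input to the multilayer recognition network, so the network never has to learn invariances itself.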
Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System
Ajina, Sara; Bridge, Holly
2017-01-01
Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337
Model of rhythmic ball bouncing using a visually controlled neural oscillator.
Avrin, Guillaume; Siegler, Isabelle A; Makarov, Maria; Rodriguez-Ayerbe, Pedro
2017-10-01
The present paper investigates the sensory-driven modulations of central pattern generator dynamics that can be expected to reproduce human behavior during rhythmic hybrid tasks. We propose a theoretical model of human sensorimotor behavior able to account for the observed data from the ball-bouncing task. The novel control architecture is composed of a Matsuoka neural oscillator coupled with the environment through visual sensory feedback. The architecture's ability to reproduce human-like performance during the ball-bouncing task in the presence of perturbations is quantified by comparison of simulated and recorded trials. The results suggest that human visual control of the task is achieved online. The adaptive behavior is made possible by a parametric and state control of the limit cycle emerging from the interaction of the rhythmic pattern generator, the musculoskeletal system, and the environment. NEW & NOTEWORTHY The study demonstrates that a behavioral model based on a neural oscillator controlled by visual information is able to accurately reproduce human modulations in a motor action with respect to sensory information during the rhythmic ball-bouncing task. The model attractor dynamics emerging from the interaction between the neuromusculoskeletal system and the environment met task requirements, environmental constraints, and human behavioral choices without relying on movement planning and explicit internal models of the environment. Copyright © 2017 the American Physiological Society.
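The Matsuoka oscillator at the core of this architecture is a pair of neurons with mutual inhibition and self-adaptation, driven by a tonic input. The Euler-integration sketch below shows the uncoupled oscillator only, without the paper's visual feedback loop or musculoskeletal model; the parameter values are generic textbook choices, not those of the study.

```python
import numpy as np

def matsuoka_oscillator(steps=4000, dt=0.001, tau_r=0.1, tau_a=0.2,
                        beta=2.5, w=2.5, c=1.0):
    """Euler simulation of a two-neuron Matsuoka oscillator: mutual
    inhibition (weight w) plus self-adaptation (gain beta) under tonic
    drive c. Returns the alternating output y1 - y2 over time."""
    x = np.array([0.1, 0.0])   # membrane states; asymmetric start breaks symmetry
    v = np.zeros(2)            # adaptation (fatigue) states
    out = np.empty(steps)
    for t in range(steps):
        y = np.maximum(x, 0.0)                        # rectified firing rates
        x = x + dt / tau_r * (-x - beta * v - w * y[::-1] + c)
        v = v + dt / tau_a * (-v + y)
        out[t] = y[0] - y[1]
    return out
```

In the full model, sensory feedback (here, visual information about the ball) would enter as an additional input term modulating x, which is how the limit cycle gets entrained by the environment.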
Similarities in neural activations of face and Chinese character discrimination.
Liu, Jiangang; Tian, Jie; Li, Jun; Gong, Qiyong; Lee, Kang
2009-02-18
This study compared Chinese participants' visual discrimination of Chinese faces with that of Chinese characters, which are highly similar to faces on a variety of dimensions. Both Chinese faces and characters activated the bilateral middle fusiform with high levels of correlations. These findings suggest that although the expertise systems for faces and written symbols are known to be anatomically differentiated at the later stages of processing to serve face processing or written-symbol-specific processing purposes, they may share similar neural structures in the ventral occipitotemporal cortex at the stages of visual processing.
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation
Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.
2016-01-01
Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419
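Of the encoding techniques listed, rate-based Poisson spike generation is the simplest: each pixel's intensity sets a firing rate, and each time step draws a spike with probability rate times the step size. The sketch below shows this idea under parameter names of our own choosing; the benchmark's actual generator may differ in details.

```python
import numpy as np

def poisson_spike_trains(image, duration_ms=100, max_rate_hz=1000.0,
                         dt_ms=1.0, seed=0):
    """Rate-based Poisson encoding: each pixel intensity in [0, 1] sets the
    firing rate of one input neuron; each time step draws a Bernoulli spike
    with probability rate * dt. Returns a (time_steps, n_pixels) bool array."""
    rng = np.random.default_rng(seed)
    rates_hz = np.asarray(image, dtype=float).ravel() * max_rate_hz
    p_spike = rates_hz * dt_ms / 1000.0   # spike probability per time step
    steps = int(duration_ms / dt_ms)
    return rng.random((steps, rates_hz.size)) < p_spike
```

Fed with an MNIST digit, such trains give an SNN a stochastic, temporally extended version of the static image, which is what makes trial-averaged evaluation (as in the proposed methodology) necessary.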
Two takes on the social brain: a comparison of theory of mind tasks.
Gobbini, Maria Ida; Koralek, Aaron C; Bryan, Ronald E; Montgomery, Kimberly J; Haxby, James V
2007-11-01
We compared two tasks that are widely used in research on mentalizing--false belief stories and animations of rigid geometric shapes that depict social interactions--to investigate whether the neural systems that mediate the representation of others' mental states are consistent across these tasks. Whereas false belief stories activated primarily the anterior paracingulate cortex (APC), the posterior cingulate cortex/precuneus (PCC/PC), and the temporo-parietal junction (TPJ)--components of the distributed neural system for theory of mind (ToM)--the social animations activated an extensive region along nearly the full extent of the superior temporal sulcus, including a locus in the posterior superior temporal sulcus (pSTS), as well as the frontal operculum and inferior parietal lobule (IPL)--components of the distributed neural system for action understanding--and the fusiform gyrus. These results suggest that the representation of covert mental states that may predict behavior and the representation of intentions that are implied by perceived actions involve distinct neural systems. These results show that the TPJ and the pSTS play dissociable roles in mentalizing and are parts of different distributed neural systems. Because the social animations do not depict articulated body movements, these results also highlight that the perception of the kinematics of actions is not necessary to activate the mirror neuron system, suggesting that this system plays a general role in the representation of intentions and goals of actions. Furthermore, these results suggest that the fusiform gyrus plays a general role in the representation of visual stimuli that signify agency, independent of visual form.
A neural theory of visual attention and short-term memory (NTVA).
Bundesen, Claus; Habekost, Thomas; Kyllingsbæk, Søren
2011-05-01
The neural theory of visual attention and short-term memory (NTVA) proposed by Bundesen, Habekost, and Kyllingsbæk (2005) is reviewed. In NTVA, filtering (selection of objects) changes the number of cortical neurons in which an object is represented so that this number increases with the behavioural importance of the object. Another mechanism of selection, pigeonholing (selection of features), scales the level of activation in neurons coding for a particular feature. By these mechanisms, behaviourally important objects and features are likely to win the competition to become encoded into visual short-term memory (VSTM). The VSTM system is conceived as a feedback mechanism that sustains activity in the neurons that have won the attentional competition. NTVA accounts both for a wide range of attentional effects in human performance (reaction times and error rates) and a wide range of effects observed in firing rates of single cells in the primate visual system. Copyright © 2010 Elsevier Ltd. All rights reserved.
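NTVA's filtering mechanism, by which the number of cortical neurons representing an object increases with the object's behavioral importance, can be caricatured as proportional allocation of a fixed neuron pool across attentional weights. The toy function below is our own shorthand for that single idea, not the full quantitative theory (which also covers pigeonholing and VSTM encoding).

```python
import numpy as np

def allocate_neurons(attention_weights, n_neurons=1000):
    """Filtering in the spirit of NTVA: the number of neurons representing
    each object is proportional to its attentional weight, out of a fixed
    pool of n_neurons."""
    w = np.asarray(attention_weights, dtype=float)
    return np.round(n_neurons * w / w.sum()).astype(int)
```

With weights [1, 1, 2], the doubly weighted object captures half the pool, which in the theory translates into a higher probability of winning the race into visual short-term memory.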
Prentice Award Lecture 2011: Removing the Brakes on Plasticity in the Amblyopic Brain
Levi, Dennis M.
2012-01-01
Experience-dependent plasticity is closely linked with the development of sensory function. Beyond this sensitive period, developmental plasticity is actively limited; however, new studies provide growing evidence for plasticity in the adult visual system. The amblyopic visual system is an excellent model for examining the “brakes” that limit recovery of function beyond the critical period. While amblyopia can often be reversed when treated early, conventional treatment is generally not undertaken in older children and adults. However new clinical and experimental studies in both animals and humans provide evidence for neural plasticity beyond the critical period. The results suggest that perceptual learning and video game play may be effective in improving a range of visual performance measures and importantly the improvements may transfer to better visual acuity and stereopsis. These findings, along with the results of new clinical trials, suggest that it might be time to re-consider our notions about neural plasticity in amblyopia. PMID:22581119
James, Karin H; Atwood, Thea P
2009-02-01
Functional specialization in the brain is considered a hallmark of efficient processing. It is therefore not surprising that there are brain areas specialized for processing letters. To better understand the causes of functional specialization for letters, we explore the emergence of this pattern of response in the ventral processing stream through a training paradigm. Previously, we hypothesized that the specialized response pattern seen during letter perception may be due in part to our experience in writing letters. The work presented here investigates whether this aspect of letter processing, the integration of sensorimotor systems through writing, leads to functional specialization in the visual system. To test this idea, we investigated whether different types of experience with letter-like stimuli ("pseudoletters") led to functional specialization similar to that which exists for letters. Neural activation patterns were measured using functional magnetic resonance imaging (fMRI) before and after three different types of training sessions. Participants were trained to recognize pseudoletters by writing, typing, or purely visual practice. Results suggested that only after writing practice did neural activation patterns to pseudoletters resemble those seen for letters. That is, neural activation in the left fusiform and dorsal precentral gyrus was greater when participants viewed pseudoletters than other, similar stimuli, but only after writing experience. Neural activation also increased after typing practice in the right fusiform and left precentral gyrus, suggesting that in some areas any motor experience may change visual processing. The results of this experiment suggest an intimate interaction between perceptual and motor systems during pseudoletter perception that may extend to everyday letter perception.
Sight and blindness in the same person: Gating in the visual system.
Strasburger, Hans; Waldvogel, Bruno
2015-12-01
We present the case of a patient with dissociative identity disorder (DID) who, after 15 years of misdiagnosed cortical blindness, step by step regained sight during psychotherapeutic treatment. At first only a few personality states regained vision, whereas others remained blind. This was confirmed by electrophysiological measurement, in which visual evoked potentials (VEPs) were absent in the blind personality states but normal and stable in the seeing states. A switch between these states could happen within seconds. We assume a top-down modulation of activity in the primary visual pathway, possibly at the level of the thalamus, as the neural basis of such psychogenic blindness. VEPs therefore cannot distinguish psychogenic blindness from organic disruption of the visual pathway. In summary, psychogenic blindness seems to suppress visual information at an early neural stage. © 2015 The Institute of Psychology, Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.
Sharpening of Hierarchical Visual Feature Representations of Blurred Images.
Abdelhack, Mohamed; Kamitani, Yukiyasu
2018-01-01
The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
Remembering the Past and Imagining the Future: A Neural Model of Spatial Memory and Imagery
ERIC Educational Resources Information Center
Byrne, Patrick; Becker, Suzanna; Burgess, Neil
2007-01-01
The authors model the neural mechanisms underlying spatial cognition, integrating neuronal systems and behavioral data, and address the relationships between long-term memory, short-term memory, and imagery, and between egocentric and allocentric and visual and ideothetic representations. Long-term spatial memory is modeled as attractor dynamics…
Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.
Keitel, Christian; Thut, Gregor; Gross, Joachim
2017-02-01
Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of classical theta (4-7 Hz), alpha (8-13 Hz) and beta bands (14-20 Hz) using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio)
Shang, Chunfeng; Yang, Wenbin; Bai, Lu; Du, Jiulin
2017-01-01
The internal brain dynamics that link sensation and action are arguably better studied during natural animal behaviors. Here, we report on a novel volume imaging and 3D tracking technique that monitors whole brain neural activity in freely swimming larval zebrafish (Danio rerio). We demonstrated the capability of our system through functional imaging of neural activity during visually evoked and prey capture behaviors in larval zebrafish. PMID:28930070
NASA Astrophysics Data System (ADS)
An, Soyoung; Choi, Woochul; Paik, Se-Bum
2015-11-01
Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.
Li, Ting; Niu, Yan; Xiang, Jie; Cheng, Junjie; Liu, Bo; Zhang, Hui; Yan, Tianyi; Kanazawa, Susumu; Wu, Jinglong
2018-01-01
Category-selective brain areas exhibit varying levels of neural activity to ipsilaterally presented stimuli. However, in face- and house-selective areas, the neural responses evoked by ipsilateral stimuli in the peripheral visual field remain unclear. In this study, we displayed face and house images using a wide-view visual presentation system while performing functional magnetic resonance imaging (fMRI). The face-selective areas (fusiform face area (FFA) and occipital face area (OFA)) exhibited intense neural responses to ipsilaterally presented images, whereas the house-selective areas (parahippocampal place area (PPA) and transverse occipital sulcus (TOS)) exhibited substantially smaller and even negative neural responses to the ipsilaterally presented images. We also found that the category preferences of the contralateral and ipsilateral neural responses were similar. Interestingly, the face- and house-selective areas exhibited neural responses to ipsilateral images that were smaller than the responses to contralateral images. Multi-voxel pattern analysis (MVPA) was implemented to evaluate the difference between the contralateral and ipsilateral responses. The classification accuracies were much greater than those expected by chance. The classification accuracies in the FFA were smaller than those in the PPA and TOS. Closer eccentricities elicited greater classification accuracies in the PPA and TOS. We propose that these ipsilateral neural responses might be mediated by interhemispheric communication, supported by intrahemispheric white matter connectivity together with interhemispheric connections via the corpus callosum and occipital white matter. Furthermore, the PPA and TOS likely have weaker interhemispheric communication than the FFA and OFA, particularly in the peripheral visual field. PMID:29451872
ERIC Educational Resources Information Center
Zhao, Pei; Zhao, Jing; Weng, Xuchu; Li, Su
2018-01-01
Visual word N170 is an index of perceptual expertise for visual words across different writing systems. Recent developmental studies have shown the early emergence of visual word N170 and its close association with individual's reading ability. In the current study, we investigated whether fine-tuning N170 for Chinese characters could emerge after…
Visual development in primates: Neural mechanisms and critical periods
Kiorpes, Lynne
2015-01-01
Despite many decades of research into the development of visual cortex, it remains unclear what neural processes set limitations on the development of visual function and define its vulnerability to abnormal visual experience. This selected review examines the development of visual function and its neural correlates, and highlights the fact that in most cases receptive field properties of infant neurons are substantially more mature than infant visual function. One exception is temporal resolution, which can be accounted for by resolution of neurons at the level of the LGN. In terms of spatial vision, properties of single neurons alone are not sufficient to account for visual development. Different visual functions develop over different time courses. Their onset may be limited by the existence of neural response properties that support a given perceptual ability, but the subsequent time course of maturation to adult levels remains unexplained. Several examples are offered suggesting that taking account of weak signaling by infant neurons, correlated firing, and pooled responses of populations of neurons brings us closer to an understanding of the relationship between neural and behavioral development. PMID:25649764
Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi
2013-12-01
Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inference for a single cause (source) and for two causes (e.g., two senses such as the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the earlier literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effects of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. The results show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the mean visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for the proprioceptive error and non-significant for the visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than the errors of the main test stage, because in the main test the subject has to attend to two senses.
The results of the experiments in this paper are in agreement with the results of the neural model simulation.
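The traditional Bayesian account that the neural model is compared against reduces, for Gaussian noise, to reliability-weighted averaging: each modality's estimate is weighted by its inverse variance. A minimal sketch (the estimates and variances below are illustrative values, not data from the study):

```python
def fuse(est_v, var_v, est_p, var_p):
    """Reliability-weighted (maximum-likelihood) fusion of a visual and a
    proprioceptive position estimate; each weight is an inverse variance."""
    w_v = 1.0 / var_v
    w_p = 1.0 / var_p
    fused = (w_v * est_v + w_p * est_p) / (w_v + w_p)
    fused_var = 1.0 / (w_v + w_p)  # fused estimate is more precise than either cue
    return fused, fused_var

# Vision is more reliable here (smaller variance), so the fused estimate
# lies closer to the visual estimate.
pos, var = fuse(est_v=10.0, var_v=1.0, est_p=14.0, var_p=4.0)
```

Under this scheme, training that reduces one modality's variance automatically shifts the fused percept toward that modality, which is one way to read the training effects reported above.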
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
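The cross-task decoding logic described above — train a classifier on one task's high/low load patterns, then test it on the other task's patterns — can be illustrated with a toy nearest-centroid classifier over synthetic "activation patterns" (a simplification for illustration; the study used a trained machine-learning classifier on fMRI data):

```python
import math

def centroid(patterns):
    """Mean pattern (class centroid) across a list of equal-length vectors."""
    return [sum(col) / len(patterns) for col in zip(*patterns)]

def nearest_centroid_predict(pattern, centroids):
    """Return the label of the closest class centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# "Train" on visual-WM patterns (high vs. low load)...
visual_high = [[1.0, 0.9, 0.1], [0.9, 1.1, 0.0]]
visual_low  = [[0.1, 0.2, 1.0], [0.0, 0.1, 0.9]]
centroids = {"high": centroid(visual_high), "low": centroid(visual_low)}

# ...then test on an unseen pattern from the other (verbal-WM) task.
verbal_trial = [0.8, 1.0, 0.2]
label = nearest_centroid_predict(verbal_trial, centroids)
```

Above-chance prediction on the held-out task is what supports the claim of shared, attention-based neural load patterns.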
Mechanisms Underlying Development of Visual Maps and Receptive Fields
Huberman, Andrew D.; Feller, Marla B.; Chapman, Barbara
2008-01-01
Patterns of synaptic connections in the visual system are remarkably precise. These connections dictate the receptive field properties of individual visual neurons and ultimately determine the quality of visual perception. Spontaneous neural activity is necessary for the development of various receptive field properties and visual feature maps. In recent years, attention has shifted to understanding the mechanisms by which spontaneous activity in the developing retina, lateral geniculate nucleus, and visual cortex instruct the axonal and dendritic refinements that give rise to orderly connections in the visual system. Axon guidance cues and a growing list of other molecules, including immune system factors, have also recently been implicated in visual circuit wiring. A major goal now is to determine how these molecules cooperate with spontaneous and visually evoked activity to give rise to the circuits underlying precise receptive field tuning and orderly visual maps. PMID:18558864
Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai
2012-01-01
In this paper, we present a quantitative, highly structured cortex-simulated model, which can be described as a feedforward, hierarchical simulation of the ventral stream of visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering work on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and from developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchical framework is capable of dealing with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms. PMID:23193391
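The elementary computational unit of such SNN frameworks is typically a leaky integrate-and-fire neuron. A minimal discrete-time sketch (the parameter values are illustrative, not taken from the paper):

```python
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    v_rest, integrates the input current, and emits a spike (recorded as a
    time step index) whenever it crosses v_thresh, then resets."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest  # reset after spike
    return spikes

# A constant suprathreshold input produces regular, periodic spiking.
spikes = lif_simulate([2.0] * 100)
```

Stacking layers of such units, with feedforward connections patterned on the ventral-stream hierarchy, is the basic architectural idea the abstract describes.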
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
2012-02-01
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Numerosity as a topological invariant.
Kluth, Tobias; Zetzsche, Christoph
2016-01-01
The ability to quickly recognize the number of objects in our environment is a fundamental cognitive function. However, it is far from clear which computations and which actual neural processing mechanisms are used to provide us with such a skill. Here we try to provide a detailed and comprehensive analysis of this issue, which comprises both the basic mathematical foundations and the peculiarities imposed by the structure of the visual system and by the neural computations provided by the visual cortex. We suggest that numerosity should be considered as a mathematical invariant. Making use of concepts from mathematical topology--like connectedness, Betti numbers, and the Gauss-Bonnet theorem--we derive the basic computations suited for the computation of this invariant. We show that the computation of numerosity is possible in a neurophysiologically plausible fashion using only computational elements which are known to exist in the visual cortex. We further show that a fundamental feature of numerosity perception, its Weber property, arises naturally, assuming noise in the basic neural operations. The model is tested on an extended data set (made publicly available). It is hoped that our results can provide a general framework for future research on the invariance properties of the numerosity system.
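The connectedness computation at the heart of this proposal can be illustrated outside the neural setting: counting the 4-connected components of a binary image yields the zeroth Betti number, i.e., the numerosity of the displayed objects. A simple flood-fill sketch (a discrete stand-in for the topological invariant, not the authors' neurophysiological implementation):

```python
def count_objects(grid):
    """Count 4-connected components of 1s in a binary grid: a discrete
    analogue of numerosity as connectedness (the zeroth Betti number)."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                count += 1
                stack = [(r, c)]  # flood-fill this component
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if grid[y][x] != 1:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

image = [[1, 1, 0, 0],
         [0, 0, 0, 1],
         [1, 0, 0, 1]]
n = count_objects(image)  # three separate blobs
```

The paper's contribution is showing that an equivalent invariant can be computed by operations known to exist in visual cortex, with Weber-law behavior emerging once neural noise is added.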
Liu, Yi; Zang, Xuelian; Chen, Lihan; Assumpção, Leonardo; Li, Hong
2018-01-01
The growth of online shopping increases consumers' dependence on vicarious sensory experiences, such as observing others touching products in commercials. However, empirical evidence on whether observing others' sensory experiences increases purchasing intention is still scarce. In the present study, participants observed others interacting with products from the first- or third-person perspective in video clips, and their neural responses were measured with functional magnetic resonance imaging (fMRI). We investigated (1) whether and how vicariously touching certain products affected purchasing intention, and the neural correlates of this process; and (2) how visual perspective interacts with vicarious tactility. Vicarious tactile experiences were manipulated by hand actions touching or not touching the products, while the visual perspective was manipulated by showing the hand actions in either the first- or third-person perspective. During fMRI scanning, participants watched the video clips and rated their purchasing intention for each product. The results showed that observing others touching (vs. not touching) the products increased purchasing intention, with vicarious neural responses found in the mirror neuron system (MNS) and lateral occipital complex (LOC). Moreover, stronger neural activity in the MNS was associated with higher purchasing intention. The effects of visual perspective were found in the left superior parietal lobule (SPL), while the interaction of tactility and visual perspective was shown in the precuneus and in precuneus-LOC connectivity. The present study provides the first evidence that vicariously touching a given product increases purchasing intention, and that the bilateral MNS, LOC, left SPL, and precuneus are involved in this process. Hum Brain Mapp 39:332-343, 2018. © 2017 Wiley Periodicals, Inc.
Visual Working Memory Enhances the Neural Response to Matching Visual Input.
Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp
2017-07-12
Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors 0270-6474/17/376638-10$15.00/0.
1989-08-14
DISCRIMINATE SIMILAR KANJI CHARACTERS. Yoshihiro Mori, Kazuhiko Yokosawa. FURTHER EXPLORATIONS IN THE LEARNING OF VISUALLY-GUIDED REACHING: MAKING MURPHY...NETWORKS THAT LEARN TO DISCRIMINATE SIMILAR KANJI CHARACTERS. Yoshihiro Mori, Kazuhiko Yokosawa, ATR Auditory and Visual Perception Research Laboratories
Pop-out in visual search of moving targets in the archer fish.
Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen
2015-03-10
Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates.
Neural networks for calibration tomography
NASA Technical Reports Server (NTRS)
Decker, Arthur
1993-01-01
Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for use with flow visualization data.
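Pattern-to-pattern calibration in this sense means learning a mapping from measured patterns to target patterns directly from example pairs. A deliberately tiny stand-in, fitting a one-dimensional linear map by gradient descent on squared error (entirely illustrative; the paper's networks and interferometric data are not reproduced here):

```python
def calibrate(xs, ys, lr=0.05, epochs=500):
    """Fit y ≈ w*x + b by gradient descent on mean squared error:
    the simplest possible pattern-to-pattern calibration."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Calibration pairs generated from the "true" relation y = 3x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]
w, b = calibrate(xs, ys)
```

A calibration network generalizes this idea to high-dimensional, nonlinear mappings, such as from a set of interferogram projections to a density field.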
Sensitive periods in affective development: nonlinear maturation of fear learning.
Hartley, Catherine A; Lee, Francis S
2015-01-01
At specific maturational stages, neural circuits enter sensitive periods of heightened plasticity, during which the development of both brain and behavior are highly receptive to particular experiential information. A relatively advanced understanding of the regulatory mechanisms governing the initiation, closure, and reinstatement of sensitive period plasticity has emerged from extensive research examining the development of the visual system. In this article, we discuss a large body of work characterizing the pronounced nonlinear changes in fear learning and extinction that occur from childhood through adulthood, and their underlying neural substrates. We draw upon the model of sensitive period regulation within the visual system, and present burgeoning evidence suggesting that parallel mechanisms may regulate the qualitative changes in fear learning across development.
Neural systems for preparatory control of imitation.
Cross, Katy A; Iacoboni, Marco
2014-01-01
Humans have an automatic tendency to imitate others. Previous studies on how we control these tendencies have focused on reactive mechanisms, where inhibition of imitation is implemented after seeing an action. This work suggests that reactive control of imitation draws on at least partially specialized mechanisms. Here, we examine preparatory imitation control, where advance information allows control processes to be employed before an action is observed. Drawing on dual route models from the spatial compatibility literature, we compare control processes using biological and non-biological stimuli to determine whether preparatory imitation control recruits specialized neural systems that are similar to those observed in reactive imitation control. Results indicate that preparatory control involves anterior prefrontal, dorsolateral prefrontal, posterior parietal and early visual cortices regardless of whether automatic responses are evoked by biological (imitative) or non-biological stimuli. These results indicate both that preparatory control of imitation uses general mechanisms, and that preparatory control of imitation draws on different neural systems from reactive imitation control. Based on the regions involved, we hypothesize that preparatory control is implemented through top-down attentional biasing of visual processing.
Parallel Computations in Insect and Mammalian Visual Motion Processing
Clark, Damon A.; Demb, Jonathan B.
2016-01-01
Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world. PMID:27780048
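A classic example of the motion-estimation algorithms compared in this literature is the Hassenstein-Reichardt correlator, which multiplies a delayed signal from one photoreceptor with the undelayed signal from its neighbor and subtracts the mirror-symmetric arm. A minimal discrete-time sketch (illustrative; not code from the review):

```python
def reichardt_output(left, right, delay=1):
    """Hassenstein-Reichardt correlator: correlate a delayed copy of one
    input with the undelayed neighbor in both mirror-symmetric arms and
    subtract. A positive sum signals left-to-right motion."""
    out = []
    for t in range(delay, len(left)):
        arm_lr = left[t - delay] * right[t]   # tuned to left -> right motion
        arm_rl = right[t - delay] * left[t]   # tuned to right -> left motion
        out.append(arm_lr - arm_rl)
    return sum(out)

# A bright edge reaching the left receptor one step before the right one
# (left-to-right motion) yields a positive correlator response.
left_signal  = [0, 1, 0, 0]
right_signal = [0, 0, 1, 0]
response = reichardt_output(left_signal, right_signal)
```

The review's point is that fly and mouse circuits arrive at computations of this general form through very different anatomy and molecular machinery.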
Distinct Mechanisms for Synchronization and Temporal Patterning of Odor-Encoding Neural Assemblies
NASA Astrophysics Data System (ADS)
MacLeod, Katrina; Laurent, Gilles
1996-11-01
Stimulus-evoked oscillatory synchronization of neural assemblies and temporal patterns of neuronal activity have been observed in many sensory systems, such as the visual and auditory cortices of mammals or the olfactory system of insects. In the locust olfactory system, single odor puffs cause the immediate formation of odor-specific neural assemblies, defined both by their transient synchronized firing and their progressive transformation over the course of a response. The application of an antagonist of ionotropic γ-aminobutyric acid (GABA) receptors to the first olfactory relay neuropil selectively blocked the fast inhibitory synapse between local and projection neurons. This manipulation abolished the synchronization of the odor-coding neural ensembles but did not affect each neuron's temporal response patterns to odors, even when these patterns contained periods of inhibition. Fast GABA-mediated inhibition, therefore, appears to underlie neuronal synchronization but not response tuning in this olfactory system. The selective desynchronization of stimulus-evoked oscillating neural assemblies in vivo is now possible, enabling direct functional tests of their significance for sensation and perception.
Coding the presence of visual objects in a recurrent neural network of visual cortex.
Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard
2007-01-01
Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.
Neural Responses to Central and Peripheral Objects in the Lateral Occipital Cortex
Wang, Bin; Guo, Jiayue; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Huang, Qiang; Wu, Jinglong
2016-01-01
Human object recognition and classification depend on the retinal location where the object is presented and decrease as eccentricity increases. The lateral occipital complex (LOC) is thought to be preferentially involved in the processing of objects, and its neural responses exhibit category biases to objects presented in the central visual field. However, the nature of LOC neural responses to central and peripheral objects remains largely unclear. In the present study, we used functional magnetic resonance imaging (fMRI) and a wide-view presentation system to investigate neural responses to four categories of objects (faces, houses, animals, and cars) in the primary visual cortex (V1) and the lateral visual cortex, including the LOC and the retinotopic areas LO-1 and LO-2. In these regions, the neural responses to objects decreased as the distance between the location of presentation and center fixation increased, which is consistent with the diminished perceptual ability that was found for peripherally presented images. The LOC and LO-2 exhibited significantly positive neural responses to all eccentricities (0–55°), but LO-1 exhibited significantly positive responses only to central eccentricities (0–22°). By measuring the ratio relative to V1 (RRV1), we further demonstrated that eccentricity, category and the interaction between them significantly affected neural processing in these regions. LOC, LO-1, and LO-2 exhibited larger RRV1s when stimuli were presented at an eccentricity of 0° compared to when they were presented at the greater eccentricities. In LOC and LO-2, the RRV1s for images of faces, animals and cars showed an increasing trend when the images were presented at eccentricities of 11 to 33°. However, the RRV1s for houses showed a decreasing trend in LO-1 and no difference in the LOC and LO-2. 
We hypothesize that when houses and the images in the other categories were presented in the peripheral visual field, they were processed via different strategies in the lateral visual cortex. PMID:26924972
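The RRV1 measure described above is simply each region's response divided by the V1 response to the same stimulus condition. A minimal numpy sketch, using made-up response amplitudes (the array shapes, region list, and values are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical response amplitudes (e.g., GLM beta weights), shape:
# (n_regions, n_categories, n_eccentricities). Row 0 is V1.
rng = np.random.default_rng(0)
responses = rng.uniform(0.5, 2.0, size=(4, 4, 6))

regions = ["V1", "LOC", "LO-1", "LO-2"]
eccentricities = [0, 11, 22, 33, 44, 55]  # degrees of visual angle

# Ratio relative to V1 (RRV1): each region's response divided by the
# V1 response to the same category at the same eccentricity.
rrv1 = responses / responses[0]  # broadcasts over the region axis

# V1's ratio to itself is 1 by construction.
assert np.allclose(rrv1[0], 1.0)
```

Normalizing by V1 in this way factors out eccentricity-dependent differences already present at the cortical input stage, so remaining variation reflects downstream processing.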
Integrating conflict detection and attentional control mechanisms.
Walsh, Bong J; Buonocore, Michael H; Carter, Cameron S; Mangun, George R
2011-09-01
Human behavior involves monitoring and adjusting performance to meet established goals. Performance-monitoring systems that act by detecting conflict in stimulus and response processing have been hypothesized to influence cortical control systems to adjust and improve performance. Here we used fMRI to investigate the neural mechanisms of conflict monitoring and resolution during voluntary spatial attention. We tested the hypothesis that the ACC would be sensitive to conflict during attentional orienting and influence activity in the frontoparietal attentional control network that selectively modulates visual information processing. We found that activity in ACC increased monotonically with increasing attentional conflict. This increased conflict detection activity was correlated with both increased activity in the attentional control network and improved speed and accuracy from one trial to the next. These results establish a long hypothesized interaction between conflict detection systems and neural systems supporting voluntary control of visual attention.
Microstimulation with Chronically Implanted Intracortical Electrodes
NASA Astrophysics Data System (ADS)
McCreery, Douglas
Stimulating microelectrodes that penetrate into the brain afford a means of accessing the basic functional units of the central nervous system. Microstimulation in the regions of the cerebral cortex that subserve vision may be an alternative, or an adjunct, to a retinal prosthesis, and may be particularly attractive as a means of restoring a semblance of high-resolution central vision. There is also the intriguing possibility that such a prosthesis could convey higher-order visual percepts, many of which are mediated by neural circuits in the secondary or "extra-striate" visual areas that surround the primary visual cortex. The technologies of intracortical stimulating microelectrodes and investigations of the effects of microstimulation on neural tissue have advanced to the point where a cortical-level prosthesis is at least feasible. The imperative of protecting neural tissue from stimulation-induced damage imposes constraints on the selection of stimulus parameters, as does the requirement that the stimulation not greatly affect the electrical excitability of the neurons that are to be activated. The latter is especially likely to occur when many adjacent microelectrodes are pulsed, as will be necessary in a visual prosthesis. However, data from animal studies indicate that these restrictions on stimulus parameters are compatible with parameters that can evoke visual percepts in humans and in experimental animals. These findings give cause to be optimistic about the prospects for realizing a visual prosthesis utilizing intracortical microstimulation.
EEG-based usability assessment of 3D shutter glasses
NASA Astrophysics Data System (ADS)
Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin
2016-02-01
Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.
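The core of the analysis above is decoding the shutter state from EEG: if a classifier can tell "shutter open" from "shutter closed" better than chance, the flicker leaves a neural trace even when it is not consciously perceived. A toy numpy sketch of that logic on simulated single-channel data (the amplitudes, sample counts, and nearest-mean decoder are illustrative stand-ins; the study used multichannel EEG and ICA-based source extraction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated occipital EEG samples, labeled by shutter state
# (0 = closed, 1 = open). All values are arbitrary toy numbers.
n = 2000
labels = rng.integers(0, 2, size=n)
flicker_amplitude = 0.5  # strength of the shutter-locked component
eeg = flicker_amplitude * (2 * labels - 1) + rng.normal(0.0, 1.0, size=n)

# Nearest-class-mean "decoder": fit on the first half, test on the second.
train, test = slice(0, n // 2), slice(n // 2, n)
mean_closed = eeg[train][labels[train] == 0].mean()
mean_open = eeg[train][labels[train] == 1].mean()
predicted = (np.abs(eeg[test] - mean_open)
             < np.abs(eeg[test] - mean_closed)).astype(int)
accuracy = (predicted == labels[test]).mean()
# With a nonzero shutter-locked component, accuracy exceeds the 0.5
# chance level; as the component shrinks toward zero (higher shutter
# frequencies), accuracy falls back to chance.
```

Sweeping `flicker_amplitude` toward zero mimics raising the shutter frequency past the point where the neural flicker vanishes.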
Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.
Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K
2017-08-30
A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic, features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence and examined the effects on the perceptual echo, finding that echo amplitude increased linearly with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning.
The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that the perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency.
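The echo is computed as a cross-correlation between the random luminance sequence and the EEG at increasing lags. A minimal numpy sketch on simulated data, where the "EEG" is built with an explicit damped ∼10 Hz impulse response so the echo is recoverable (sampling rate, noise level, and kernel time constant are made-up values, not the study's):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 160.0                  # sampling rate in Hz (hypothetical)
n = 4800                    # 30 s of stimulation
t = np.arange(n) / fs

# Random (nonperiodic) luminance sequence shown to the subject.
luminance = rng.normal(size=n)

# Toy occipital EEG: the visual response echoes the luminance input
# as a damped ~10 Hz reverberation, buried in noise.
lag = np.arange(int(fs))    # 1 s impulse response
kernel = np.exp(-lag / (fs * 0.3)) * np.cos(2 * np.pi * 10 * lag / fs)
eeg = np.convolve(luminance, kernel)[:n] + rng.normal(0.0, 5.0, size=n)

# Cross-correlate stimulus and EEG at positive lags: the perceptual
# echo appears as a long-lasting ~100 ms-cycle ripple.
max_lag = int(fs)           # examine up to 1 s of lag
xcorr = np.array([np.dot(luminance[:n - k], eeg[k:]) / (n - k)
                  for k in range(max_lag)])
```

Because the luminance sequence is approximately white, the cross-correlation recovers the impulse response itself; echo amplitude can then be quantified, for example, as the power of `xcorr` in the alpha band.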
Addition of visual noise boosts evoked potential-based brain-computer interface.
Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili
2014-05-14
Although noise has a proven beneficial role in brain function, stochastic resonance effects had not previously been exploited in neural engineering applications, particularly in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to enhancement of the periodic components in brain responses, accompanied by suppression of higher harmonics. Offline results exhibited a bell-shaped, resonance-like profile, and online performance improvements of 7-36% were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.
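The bell-shaped, resonance-like dependence on noise level is the signature of stochastic resonance: a subthreshold periodic input produces no output at zero noise, a strong periodic output at moderate noise, and a washed-out output at high noise. A minimal numpy sketch with a simple threshold nonlinearity (the frequencies, threshold, and noise levels are illustrative, not the study's stimulation parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f_stim, dur = 250.0, 15.0, 8.0   # sample rate, stimulus freq, seconds
t = np.arange(int(fs * dur)) / fs
subthreshold = 0.8 * np.sin(2 * np.pi * f_stim * t)  # below threshold 1.0

def output_power_at_f(noise_sd):
    """Power at the stimulation frequency after a threshold nonlinearity."""
    spikes = (subthreshold + rng.normal(0.0, noise_sd, t.size) > 1.0)
    c = np.dot(spikes.astype(float),
               np.exp(-2j * np.pi * f_stim * t)) / t.size
    return np.abs(c) ** 2

noise_levels = [0.0, 0.05, 0.2, 0.5, 1.0, 3.0]
powers = [output_power_at_f(sd) for sd in noise_levels]
# Zero noise: the subthreshold stimulus never crosses threshold, so the
# output carries no power at f_stim; moderate noise reveals the periodic
# component, and very large noise dilutes it again (bell-shaped curve).
```

This is the same qualitative mechanism the abstract invokes for low-pass human sensory systems: moderate noise synchronizes the thresholded output with the periodic input.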
Yoon, Jong H; Sheremata, Summer L; Rokem, Ariel; Silver, Michael A
2013-10-31
Cognitive and information processing deficits are core features and important sources of disability in schizophrenia. Our understanding of the neural substrates of these deficits remains incomplete, in large part because the complexity of impairments in schizophrenia makes the identification of specific deficits very challenging. Vision science presents unique opportunities in this regard: many years of basic research have led to detailed characterization of relationships between structure and function in the early visual system and have produced sophisticated methods to quantify visual perception and characterize its neural substrates. We present a selective review of research that illustrates the opportunities for discovery provided by visual studies in schizophrenia. We highlight work that has been particularly effective in applying vision science methods to identify specific neural abnormalities underlying information processing deficits in schizophrenia. In addition, we describe studies that have utilized psychophysical experimental designs that mitigate generalized deficit confounds, thereby revealing specific visual impairments in schizophrenia. These studies contribute to accumulating evidence that early visual cortex is a useful experimental system for the study of local cortical circuit abnormalities in schizophrenia. The high degree of similarity across neocortical areas of neuronal subtypes and their patterns of connectivity suggests that insights obtained from the study of early visual cortex may be applicable to other brain regions. We conclude with a discussion of future studies that combine vision science and neuroimaging methods. These studies have the potential to address pressing questions in schizophrenia, including the dissociation of local circuit deficits vs. impairments in feedback modulation by cognitive processes such as spatial attention and working memory, and the relative contributions of glutamatergic and GABAergic deficits.
Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.
2014-01-01
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294
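The "kernel analysis" extension mentioned above traces generalization accuracy as decoder complexity is allowed to grow. A rough numpy sketch of that idea, using kernel ridge regression on toy data with the regularization strength as a stand-in for representational complexity (the kernel, data, and regularization grid are all hypothetical; this is not the paper's exact metric):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "representation": 200 samples x 50 features, binary object labels.
X = rng.normal(size=(200, 50))
y = np.sign(X[:, :5].sum(axis=1))  # labels depend on a few features

def rbf_kernel(A, B, gamma=0.02):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

train, test = slice(0, 150), slice(150, 200)
K = rbf_kernel(X[train], X[train])
K_test = rbf_kernel(X[test], X[train])

# Kernel-analysis-style curve: held-out accuracy as the regularizer
# (a proxy for allowed decoder complexity) is relaxed.
accuracies = []
for lam in [1e3, 1e1, 1e-1, 1e-3]:
    alpha = np.linalg.solve(K + lam * np.eye(150), y[train])
    accuracies.append(float((np.sign(K_test @ alpha) == y[test]).mean()))
```

Comparing such accuracy-versus-complexity curves between a DNN layer and IT recordings, rather than single accuracy numbers, is what lets the comparison correct for decoder complexity.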
Neural Systems Involved in Fear and Anxiety Measured with Fear-Potentiated Startle
ERIC Educational Resources Information Center
Davis, Michael
2006-01-01
A good deal is now known about the neural circuitry involved in how conditioned fear can augment a simple reflex (fear-potentiated startle). This involves visual or auditory as well as shock pathways that project via the thalamus and perirhinal or insular cortex to the basolateral amygdala (BLA). The BLA projects to the central (CeA) and medial…
Neural network based visualization of collaborations in a citizen science project
NASA Astrophysics Data System (ADS)
Morais, Alessandra M. M.; Santos, Rafael D. C.; Raddick, M. Jordan
2014-05-01
Citizen science projects are those in which volunteers are asked to collaborate in scientific projects, usually by volunteering idle computer time for distributed data processing efforts or by actively labeling or classifying information - shapes of galaxies, whale sounds, and historical records are all examples of citizen science projects in which users access a data collection system to label or classify images and sounds. In order to be successful, a citizen science project must captivate users and keep them interested in the project and in the science behind it, thereby increasing the time the users spend collaborating with the project. Understanding the behavior of citizen scientists and their interaction with the data collection systems may help increase the involvement of the users, categorize them according to different parameters, facilitate their collaboration with the systems, inform the design of better user interfaces, and allow better planning and deployment of similar projects and systems. User behavior can be actively monitored or derived from interactions with the data collection systems. Records of the interactions can be analyzed using visualization techniques to identify patterns and outliers. In this paper we present some results on the visualization of more than 80 million interactions of almost 150 thousand users with the Galaxy Zoo I citizen science project. Visualization of the attributes extracted from their behaviors was done with a clustering neural network (the Self-Organizing Map) and a selection of icon- and pixel-based techniques. These techniques allow the visual identification of groups of similar behavior in several different ways.
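The Self-Organizing Map used above clusters users by mapping similar behavior vectors onto nearby map nodes. A minimal 1-D SOM sketch in numpy (the two behavior features, map size, and schedules are made-up illustrations, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-user behavior features (e.g., session count and mean
# classification time), standardized. Real data would have many more.
features = rng.normal(size=(500, 2))

# Minimal 1-D self-organizing map: a row of 10 nodes.
n_nodes, n_iter = 10, 2000
weights = rng.normal(size=(n_nodes, 2))
grid = np.arange(n_nodes)

for it in range(n_iter):
    x = features[rng.integers(len(features))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
    lr = 0.5 * (1 - it / n_iter)                         # decaying learning rate
    sigma = 3.0 * (1 - it / n_iter) + 0.5                # shrinking neighborhood
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood kernel
    weights += lr * h[:, None] * (x - weights)

# Assign each user to its best-matching node: users with similar behavior
# land on the same or neighboring map positions.
assignments = np.argmin(
    ((features[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
```

Icon- and pixel-based displays can then be drawn per node, so each map position summarizes one behavioral group.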
Visual pathways from the perspective of cost functions and multi-task deep neural networks.
Scholte, H Steven; Losch, Max M; Ramakrishnan, Kandan; de Haan, Edward H F; Bohte, Sander M
2018-01-01
Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost function. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks and that this causes the emergence of a ventral pathway dedicated to vision for perception and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature-representation sharing towards higher-tier layers, whereas the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units.
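One common way to measure a unit's contribution to each task, in the spirit of the method above, is ablation: silence the unit and record how much each task's loss changes. A toy numpy sketch with a shared hidden layer and two task heads (the random weights and quadratic losses are stand-ins; the paper's networks and exact contribution measure may differ):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy two-head network with a shared hidden layer (random weights are
# stand-ins for a trained network; real analyses use trained models).
n_in, n_hidden = 8, 16
W1 = rng.normal(size=(n_hidden, n_in))
heads = {"task_a": rng.normal(size=n_hidden),
         "task_b": rng.normal(size=n_hidden)}
X = rng.normal(size=(200, n_in))
targets = {t: X @ rng.normal(size=n_in) for t in heads}  # toy targets

def task_loss(task, ablate=None):
    h = np.maximum(X @ W1.T, 0.0)   # ReLU hidden activity
    if ablate is not None:
        h = h.copy()
        h[:, ablate] = 0.0          # silence one hidden unit
    pred = h @ heads[task]
    return ((pred - targets[task]) ** 2).mean()

# Contribution of each unit to each task: loss increase under ablation.
contribution = np.array([[task_loss(t, ablate=u) - task_loss(t)
                          for t in heads] for u in range(n_hidden)])
```

Units whose contribution is large for only one task indicate low feature sharing; comparing these profiles across layers yields the sharing gradient the abstract describes.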
Prior Knowledge about Objects Determines Neural Color Representation in Human Visual Cortex.
Vandenbroucke, A R E; Fahrenfort, J J; Meuwese, J D I; Scholte, H S; Lamme, V A F
2016-04-01
To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects (Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural responses to veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as in frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision.
Neural Representation of Motion-In-Depth in Area MT
Sanada, Takahisa M.
2014-01-01
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
Orientation-Selective Retinal Circuits in Vertebrates
Antinucci, Paride; Hindges, Robert
2018-01-01
Visual information is already processed in the retina before it is transmitted to higher visual centers in the brain. This includes the extraction of salient features from visual scenes, such as motion directionality or contrast, through neurons belonging to distinct neural circuits. Some retinal neurons are tuned to the orientation of elongated visual stimuli. Such ‘orientation-selective’ neurons are present in the retinae of most, if not all, vertebrate species analyzed to date, with species-specific differences in frequency and degree of tuning. In some cases, orientation-selective neurons have very stereotyped functional and morphological properties suggesting that they represent distinct cell types. In this review, we describe the retinal cell types underlying orientation selectivity found in various vertebrate species, and highlight their commonalities and differences. In addition, we discuss recent studies that revealed the cellular, synaptic and circuit mechanisms at the basis of retinal orientation selectivity. Finally, we outline the significance of these findings in shaping our current understanding of how this fundamental neural computation is implemented in the visual systems of vertebrates. PMID:29467629
Ugajin, Atsushi; Watanabe, Takayuki; Uchiyama, Hironobu; Sasaki, Tetsuhiko; Yajima, Shunsuke; Ono, Masato
2016-09-16
Specific genes quickly transcribed after extracellular stimuli without de novo protein synthesis are known as immediate early genes (IEGs) and are thought to contribute to learning and memory processes in the mature nervous system of vertebrates. A recent study revealed that the homolog of Early growth response protein-1 (Egr-1), which is one of the best-characterized vertebrate IEGs, shares similar properties as a neural activity-dependent gene in the adult brain of insects. With regard to the roles of vertebrate Egr-1 in neural development, a contribution to the development and growth of visual systems has been reported. However, in insects, the expression dynamics of the Egr-1 homologous gene during neural development remain poorly understood. Our expression analysis demonstrated that AmEgr, a honeybee homolog of Egr-1, was transiently upregulated in the developing brain during the early to mid pupal stages. In situ hybridization and 5-bromo-2'-deoxyuridine (BrdU) immunohistochemistry revealed that AmEgr was mainly expressed in post-mitotic cells in the optic lobes, the primary visual center of the insect brain. These findings suggest an evolutionarily conserved role of Egr homologs in the development of visual systems in vertebrates and insects.
Galeazzi, Juan M.; Navajas, Joaquín; Mender, Bedeho M. W.; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M.
2016-01-01
Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant’s gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views. PMID:27253452
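The trace learning rule at the heart of this model replaces the instantaneous postsynaptic activity in a Hebbian update with a temporally smoothed trace, so inputs that occur close together in time get bound onto the same output neuron. A minimal numpy sketch of that binding effect for one output neuron and two "retinal views" (the input encoding, trace constant, and learning rate are illustrative choices, not VisNet's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two different "retinal views" of the same hand-plus-piece
# configuration, encoded as toy binary feature vectors.
view_a = (rng.random(100) < 0.1).astype(float)
view_b = (rng.random(100) < 0.1).astype(float)

w = rng.random(100) * 0.01       # feedforward weights of one output neuron
trace, eta, alpha = 0.0, 0.8, 0.05

# Trace rule: trace(t) = (1 - eta) * y(t) + eta * trace(t-1), and the
# weight update uses the trace instead of the instantaneous activity,
# binding temporally adjacent inputs onto the same neuron.
for _ in range(50):
    for x in (view_a, view_b):   # the gaze alternates between the views
        y = w @ x
        trace = (1 - eta) * y + eta * trace
        w += alpha * trace * x
        w /= np.linalg.norm(w)   # weight normalization, as in such models

# After training, the neuron responds to both views of the configuration.
resp_a, resp_b = w @ view_a, w @ view_b
```

With `eta = 0`, the rule reduces to plain Hebbian learning and the two views would compete rather than being bound together.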
Aging reduces neural specialization in ventral visual cortex
Park, Denise C.; Polk, Thad A.; Park, Rob; Minear, Meredith; Savage, Anna; Smith, Mason R.
2004-01-01
The present study investigated whether neural structures become less functionally differentiated and specialized with age. We studied ventral visual cortex, an area of the brain that responds selectively to visual categories (faces, places, and words) in young adults, and that shows little atrophy with age. Functional MRI was used to estimate neural activity in this cortical area, while young and old adults viewed faces, houses, pseudowords, and chairs. The results demonstrated significantly less neural specialization for these stimulus categories in older adults across a range of analyses. PMID:15322270
The visual system’s internal model of the world
Lee, Tai Sing
2015-01-01
The Bayesian paradigm has provided a useful conceptual theory for understanding perceptual computation in the brain. While the detailed neural mechanisms of Bayesian inference are not fully understood, recent computational and neurophysiological work has illuminated the underlying computational principles and representational architecture. The fundamental insights are that the visual system is organized as a modular hierarchy to encode an internal model of the world, and that perception is realized by statistical inference based on this internal model. In this paper, I will discuss and analyze the varieties of representational schemes of these internal models and how they might be used to perform learning and inference. I will argue for a unified theoretical framework for relating the internal models to the observed neural phenomena and mechanisms in the visual cortex. PMID:26566294
Visual guidance in control of grasping.
Janssen, Peter; Scherberger, Hansjörg
2015-07-08
Humans and other primates possess a unique capacity to grasp and manipulate objects skillfully, a facility pervasive in everyday life that has undoubtedly contributed to the success of our species. When we reach and grasp an object, various cortical areas in the parietal and frontal lobes work together effortlessly to analyze object shape and position, transform this visual information into useful motor commands, and implement these motor representations to preshape the hand before contact with the object is made. In recent years, a growing number of studies have investigated the neural circuits underlying object grasping in both the visual and motor systems of the macaque monkey. The accumulated knowledge not only helps researchers understand how object grasping is implemented in the primate brain but may also contribute to the development of novel neural interfaces and neuroprosthetics.
How the blind "see" Braille: lessons from functional magnetic resonance imaging.
Sadato, Norihiro
2005-12-01
What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.
Vermaercke, Ben; Van den Bergh, Gert; Gerich, Florian; Op de Beeck, Hans
2015-01-01
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. It is unknown to what degree this functional organization is related to the well-known hierarchical organization of the visual system in primates. We designed a study in rats that targets one of the hallmarks of the hierarchical object vision pathway in primates: selectivity for behaviorally relevant dimensions. We compared behavioral performance in a visual water maze with neural discriminability in five visual cortical areas. We tested behavioral discrimination in two independent batches of six rats using six pairs of shapes used previously to probe shape selectivity in monkey cortex (Lehky and Sereno, 2007). The relative difficulty (error rate) of shape pairs was strongly correlated between the two batches, indicating that some shape pairs were more difficult to discriminate than others. Then, we recorded from five visual areas in naive rats, from primary visual cortex (V1) through areas LM, LI, and LL, up to lateral occipito-temporal cortex (TO). Shape selectivity in the upper layers of V1, where the information enters cortex, correlated mostly with physical stimulus dissimilarity and not with behavioral performance. In contrast, neural discriminability in lower layers of all areas was strongly correlated with behavioral performance. These findings, in combination with the results from Vermaercke et al. (2014b), suggest that the functional specialization in rodent lateral visual cortex reflects a processing hierarchy resulting in the emergence of complex selectivity that is related to behaviorally relevant stimulus differences.
Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection
Denison, Rachel N.; Silver, Michael A.
2014-01-01
During binocular rivalry, conflicting images presented to the two eyes compete for perceptual dominance, but the neural basis of this competition is disputed. In interocular switch (IOS) rivalry, rival images periodically exchanged between the two eyes generate one of two types of perceptual alternation: 1) a fast, regular alternation between the images that is time-locked to the stimulus switches and has been proposed to arise from competition at lower levels of the visual processing hierarchy, or 2) a slow, irregular alternation spanning multiple stimulus switches that has been associated with higher levels of the visual system. The existence of these two types of perceptual alternation has been influential in establishing the view that rivalry may be resolved at multiple hierarchical levels of the visual system. We varied the spatial, temporal, and luminance properties of IOS rivalry gratings and found, instead, an association between fast, regular perceptual alternations and processing by the magnocellular stream and between slow, irregular alternations and processing by the parvocellular stream. The magnocellular and parvocellular streams are two early visual pathways that are specialized for the processing of motion and form, respectively. These results provide a new framework for understanding the neural substrates of binocular rivalry that emphasizes the importance of parallel visual processing streams, and not only hierarchical organization, in the perceptual resolution of ambiguities in the visual environment. PMID:21861685
Software tool for data mining and its applications
NASA Astrophysics Data System (ADS)
Yang, Jie; Ye, Chenzhou; Chen, Nianyi
2002-03-01
A software tool for data mining is introduced, which integrates pattern recognition (PCA, Fisher, clustering, hyperenvelope, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough sets, support vector machines), and computational intelligence (neural networks, genetic algorithms, fuzzy systems). It consists of nine functional modules: pattern recognition, decision trees, association rules, fuzzy rules, neural networks, genetic algorithms, hyperenvelope, support vector machines, and visualization. The principles and knowledge representation of some of these modules are described. The tool is implemented in Visual C++ under Windows 2000. Non-monotonicity in data mining is handled by concept hierarchies and layered mining. The tool has been applied successfully to predicting regularities in the formation of ternary intermetallic compounds in alloy systems and to the diagnosis of brain glioma.
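Two of the modules named above, PCA-based pattern recognition followed by a decision-tree classifier, can be illustrated in a few lines. This is a minimal sketch on hypothetical data, not the tool itself (which is a nine-module Visual C++ system); the decision tree is reduced to a single-threshold stump.

```python
# Toy sketch: PCA feature reduction + a one-level decision tree (stump).
# Hypothetical two-cluster data; illustrative only.
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))  # eigendecomposition
    order = np.argsort(w)[::-1][:k]                  # largest eigenvalues
    return Xc @ V[:, order]

def stump_fit(x, y):
    """Best single threshold on a 1-D feature, checking both polarities."""
    best = (None, -1.0)
    for t in np.unique(x):
        acc = max(np.mean((x >= t) == y), np.mean((x < t) == y))
        if acc > best[1]:
            best = (t, acc)
    return best

# Two well-separated 2-D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
z = pca(X, 1)[:, 0]                  # first principal component scores
threshold, accuracy = stump_fit(z, y)
print(accuracy)                      # separable clusters -> accuracy 1.0
```

In a full tool, the stump would be replaced by a recursively grown tree and PCA would feed any of the other classifiers; the pipeline shape stays the same.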
TOPICAL REVIEW: Prosthetic interfaces with the visual system: biological issues
NASA Astrophysics Data System (ADS)
Cohen, Ethan D.
2007-06-01
The design of effective visual prostheses for the blind represents a challenge for biomedical engineers and neuroscientists. Significant progress has been made in the miniaturization and processing power of prosthesis electronics; however, development lags in the design and construction of effective machine-brain interfaces with visual system neurons. This review summarizes what has been learned about stimulating neurons in the human and primate retina, lateral geniculate nucleus, and visual cortex. Each level of the visual system presents unique challenges for neural interface design. Blind patients with the retinal degenerative disease retinitis pigmentosa (RP) are a common population in clinical trials of visual prostheses. The visual performance abilities of normally sighted individuals and RP patients are compared. To generate pattern vision in blind patients, the visual prosthetic interface must effectively stimulate the retinotopically organized neurons in the central visual field to elicit patterned visual percepts. The development of more biologically compatible methods of stimulating visual system neurons is critical to the development of finer spatial percepts. Prosthesis electrode arrays need to adapt to different optimal stimulus locations, stimulus patterns, and patient disease states.
Sensory optimization by stochastic tuning.
Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees
2013-10-01
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: the higher the uncertainty, the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation.
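The core mechanism, fluctuation amplitude proportional to local uncertainty, can be demonstrated with a toy random walk. This is our own minimal sketch, not the authors' circuit model: a receptive-field parameter takes random steps scaled by a hypothetical uncertainty profile, and with no feedback or supervision the parameters accumulate where uncertainty is low.

```python
# Toy illustration: state-dependent random walk. Step size grows with
# local measurement uncertainty, so the walk dwells in low-uncertainty
# regions -- an unsupervised drift toward "better" parameter values.
import numpy as np

def uncertainty(x):
    # Hypothetical uncertainty profile: lowest near x = 0
    return 0.05 + 0.5 * np.abs(x)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=500)     # 500 neurons' RF parameters
for _ in range(2000):
    x += uncertainty(x) * rng.normal(size=x.size)  # step ~ uncertainty
    x = np.clip(x, -1, 1)            # keep parameters in range

# Fraction of neurons that settled in the low-uncertainty half |x| < 0.5
frac_center = np.mean(np.abs(x) < 0.5)
print(frac_center)   # well above the 0.5 expected from a uniform spread
```

For a diffusion with state-dependent noise the stationary density scales as 1/uncertainty^2, which is why the population concentrates where measurements are most reliable even though no error signal is ever computed.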
Organization of the Drosophila larval visual circuit
Gendre, Nanae; Neagu-Maier, G Larisa; Fetter, Richard D; Schneider-Mizell, Casey M; Truman, James W; Zlatic, Marta; Cardona, Albert
2017-01-01
Visual systems transduce, process and transmit light-dependent environmental cues. Computation of visual features depends on photoreceptor neuron types (PR) present, organization of the eye and wiring of the underlying neural circuit. Here, we describe the circuit architecture of the visual system of Drosophila larvae by mapping the synaptic wiring diagram and neurotransmitters. By contacting different targets, the two larval PR-subtypes create two converging pathways potentially underlying the computation of ambient light intensity and temporal light changes already within this first visual processing center. Locally processed visual information then signals via dedicated projection interneurons to higher brain areas including the lateral horn and mushroom body. The stratified structure of the larval optic neuropil (LON) suggests common organizational principles with the adult fly and vertebrate visual systems. The complete synaptic wiring diagram of the LON paves the way to understanding how circuits with reduced numerical complexity control wide ranges of behaviors.
Kukona, Anuenue; Tabor, Whitney
2011-01-01
The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355
Crossmodal association of auditory and visual material properties in infants.
Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K
2018-06-18
The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, using near-infrared spectroscopy (NIRS), we demonstrated for the first time a mapping between auditory material properties and visual material ("Metal" and "Wood") in the right temporal region of preverbal 4- to 8-month-old infants. Furthermore, we found that infants acquired the audio-visual mapping for the "Metal" material later than for the "Wood" material, consistent with infants forming the visual property of the "Metal" material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.
End-to-End Multimodal Emotion Recognition Using Deep Neural Networks
NASA Astrophysics Data System (ADS)
Tzirakis, Panagiotis; Trigeorgis, George; Nicolaou, Mihalis A.; Schuller, Bjorn W.; Zafeiriou, Stefanos
2017-12-01
Automatic affect recognition is a challenging task due to the various modalities emotions can be expressed with. Applications can be found in many domains including multimedia retrieval and human computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content for various styles of speaking, robust features need to be extracted. To this purpose, we utilize a Convolutional Neural Network (CNN) to extract features from the speech, while for the visual modality we use a deep residual network (ResNet) of 50 layers. In addition to the importance of feature extraction, a machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, Long Short-Term Memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations between the streams, we manage to significantly outperform the traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
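The pipeline described above (per-modality feature extractors feeding a recurrent context model) can be sketched at the shape level in plain numpy. This is an illustrative stand-in, not the paper's network: a 1-D convolution plays the role of the speech CNN, a random projection stands in for the 50-layer visual ResNet, and a single hand-rolled LSTM cell models temporal context; all sizes are made up.

```python
# Shape-level sketch of a two-stream (audio + visual) recurrent pipeline.
# Weights are random; this only demonstrates the data flow, not training.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):                       # "valid" 1-D convolution
    n, k = len(x), len(w)
    return np.array([x[i:i + k] @ w for i in range(n - k + 1)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM step; W maps [x, h] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

T, H = 10, 8                            # time steps, hidden size
audio = rng.normal(size=(T, 100))       # raw speech frames (toy)
video = rng.normal(size=(T, 64))        # stand-in frame embeddings
w_a = rng.normal(size=5)                # speech "CNN" kernel
W_v = rng.normal(size=(16, 64))         # visual projection ("ResNet")
D = (100 - 5 + 1) + 16                  # fused feature size per step
W = rng.normal(size=(4 * H, D + H)) * 0.1

h, c = np.zeros(H), np.zeros(H)
for t in range(T):                      # run the fused stream through time
    feat = np.concatenate([conv1d(audio[t], w_a), W_v @ video[t]])
    h, c = lstm_step(feat, h, c, W)

arousal, valence = np.tanh(h[:2])       # toy continuous emotion readout
print(arousal, valence)
```

End-to-end training would backpropagate a regression loss on (arousal, valence) through all three components jointly, which is what lets the feature extractors specialize for the emotion task rather than being fixed handcrafted features.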
Sensory Optimization by Stochastic Tuning
Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees
2013-01-01
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system’s preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit, and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: the higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. PMID:24219849
An insect-inspired model for visual binding I: learning objects and their characteristics.
Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M
2017-04-01
Visual binding is the process of associating the responses of visual interneurons in different visual submodalities, all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.
Wiegand, Iris; Töllner, Thomas; Habekost, Thomas; Dyrholm, Mads; Müller, Hermann J; Finke, Kathrin
2014-08-01
An individual's visual attentional capacity is characterized by 2 central processing resources, visual perceptual processing speed and visual short-term memory (vSTM) storage capacity. Based on Bundesen's theory of visual attention (TVA), independent estimates of these parameters can be obtained from mathematical modeling of performance in a whole report task. The framework's neural interpretation (NTVA) further suggests distinct brain mechanisms underlying these 2 functions. Using an interindividual difference approach, the present study was designed to establish the respective ERP correlates of both parameters. Participants with higher compared to participants with lower processing speed were found to show significantly reduced visual N1 responses, indicative of higher efficiency in early visual processing. By contrast, for participants with higher relative to lower vSTM storage capacity, contralateral delay activity over visual areas was enhanced while overall nonlateralized delay activity was reduced, indicating that holding (the maximum number of) items in vSTM relies on topographically specific sustained activation within the visual system. Taken together, our findings show that the 2 main aspects of visual attentional capacity are reflected in separable neurophysiological markers, validating a central assumption of NTVA.
Stronger Neural Dynamics Capture Changes in Infants' Visual Working Memory Capacity over Development
ERIC Educational Resources Information Center
Perone, Sammy; Simmering, Vanessa R.; Spencer, John P.
2011-01-01
Visual working memory (VWM) capacity has been studied extensively in adults, and methodological advances have enabled researchers to probe capacity limits in infancy using a preferential looking paradigm. Evidence suggests that capacity increases rapidly between 6 and 10 months of age. To understand how the VWM system develops, we must understand…
The Evolution and Development of Neural Superposition
Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo
2014-01-01
Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists and evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630
The evolution and development of neural superposition.
Agi, Egemen; Langen, Marion; Altschuler, Steven J; Wu, Lani F; Zimmermann, Timo; Hiesinger, Peter Robin
2014-01-01
Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists and evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically "hard-wired" synaptic connectivity in the brain.
Wilkey, Eric D; Barone, Jordan C; Mazzocco, Michèle M M; Vogel, Stephan E; Price, Gavin R
2017-10-01
Nonsymbolic numerical comparison task performance (whereby a participant judges which of two groups of objects is numerically larger) is thought to index the efficiency of neural systems supporting numerical magnitude perception, and performance on such tasks has been related to individual differences in math competency. However, a growing body of research suggests task performance is heavily influenced by visual parameters of the stimuli (e.g. surface area and dot size of object sets) such that the correlation with math is driven by performance on trials in which number is incongruent with visual cues. Almost nothing is currently known about whether the neural correlates of nonsymbolic magnitude comparison are also affected by visual congruency. To investigate this issue, we used functional magnetic resonance imaging (fMRI) to analyze neural activity during a nonsymbolic comparison task as a function of visual congruency in a sample of typically developing high school students (n = 36). Further, we investigated the relation to math competency as measured by the preliminary scholastic aptitude test (PSAT) in 10th grade. Our results indicate that neural activity was modulated by the ratio of the dot sets being compared in brain regions previously shown to exhibit an effect of ratio (i.e. left anterior cingulate, left precentral gyrus, left intraparietal sulcus, and right superior parietal lobe) when calculated from the average of congruent and incongruent trials, as it is in most studies, and that the effect of ratio within those regions did not differ as a function of congruency condition. However, there were significant differences in other regions in overall task-related activation, as opposed to the neural ratio effect, when congruent and incongruent conditions were contrasted at the whole-brain level. 
Math competency was negatively correlated with the ratio-dependent neural response in the left insula across congruency conditions and showed distinct correlations when the data were split by condition: a positive correlation between math competency and activity in the right supramarginal gyrus during congruent trials, and a negative correlation in the left angular gyrus during incongruent trials. Together, these findings support the idea that performance on the nonsymbolic comparison task relates to math competency and that ratio-dependent neural activity does not differ by congruency condition. With regard to math competency, however, congruent and incongruent trials showed distinct relations between math competency and individual differences in ratio-dependent neural activity.
The Brain as a Distributed Intelligent Processing System: An EEG Study
da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo
2011-01-01
Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657
Emergence of order in visual system development.
Shatz, C J
1996-01-01
Neural connections in the adult central nervous system are highly precise. In the visual system, retinal ganglion cells send their axons to target neurons in the lateral geniculate nucleus (LGN) in such a way that axons originating from the two eyes terminate in adjacent but nonoverlapping eye-specific layers. During development, however, inputs from the two eyes are intermixed, and the adult pattern emerges gradually as axons from the two eyes sort out to form the layers. Experiments indicate that the sorting-out process, even though it occurs in utero in higher mammals and always before vision, requires retinal ganglion cell signaling; blocking retinal ganglion cell action potentials with tetrodotoxin prevents the formation of the layers. These action potentials are endogenously generated by the ganglion cells, which fire spontaneously and synchronously with each other, generating "waves" of activity that travel across the retina. Calcium imaging of the retina shows that the ganglion cells undergo correlated calcium bursting to generate the waves and that amacrine cells also participate in the correlated activity patterns. Physiological recordings from LGN neurons in vitro indicate that the quasiperiodic activity generated by the retinal ganglion cells is transmitted across the synapse between ganglion cells to drive target LGN neurons. These observations suggest that (i) a neural circuit within the immature retina is responsible for generating specific spatiotemporal patterns of neural activity; (ii) spontaneous activity generated in the retina is propagated across central synapses; and (iii) even before the photoreceptors are present, nerve cell function is essential for correct wiring of the visual system during early development. 
Since spontaneously generated activity is known to be present elsewhere in the developing CNS, this process of activity-dependent wiring could be used throughout the nervous system to help refine early sets of neural connections into their highly precise adult patterns. PMID:8570602
Ambrose, Joseph P; Wijeakumar, Sobanawartiny; Buss, Aaron T; Spencer, John P
2016-01-01
Visual working memory (VWM) is a key cognitive system that enables people to hold visual information in mind after a stimulus has been removed and compare past and present to detect changes that have occurred. VWM is severely capacity limited to around 3-4 items, although there are robust individual differences in this limit. Importantly, these individual differences are evident in neural measures of VWM capacity. Here, we capitalized on recent work showing that capacity is lower for more complex stimulus dimensions. In particular, we asked whether individual differences in capacity remain consistent if capacity is shifted by a more demanding task, and, further, whether the correspondence between behavioral and neural measures holds across a shift in VWM capacity. Participants completed a change detection (CD) task with simple colors and complex shapes in an fMRI experiment. As expected, capacity was significantly lower for the shape dimension. Moreover, there were robust individual differences in behavioral estimates of VWM capacity across dimensions. Similarly, participants with a stronger BOLD response for color also showed a strong neural response for shape within the lateral occipital cortex, intraparietal sulcus (IPS), and superior IPS. Although there were robust individual differences in the behavioral and neural measures, we found little evidence of systematic brain-behavior correlations across feature dimensions. This suggests that behavioral and neural measures of capacity provide different views onto the processes that underlie VWM and CD. Recent theoretical approaches that attempt to bridge between behavioral and neural measures are well positioned to address these findings in future work.
Crowding, visual awareness, and their respective neural loci
Shin, Kilho; Chung, Susana T. L.; Tjan, Bosco S.
2017-01-01
In peripheral vision, object identification can be impeded when a target object is flanked by other objects. This phenomenon of crowding has been attributed to basic processes associated with image encoding by the visual system, but the neural origin of crowding is not known. Determining whether crowding depends on subjective awareness of the flankers can provide information on the neural origin of crowding. However, recent studies that manipulated flanker awareness have yielded conflicting results. In the current study, we suppressed flanker awareness with two methods: interocular suppression (IOS) and adaptation-induced blindness (AIB). We tested two different types of stimuli: gratings and letters. With IOS, we found that the magnitude of crowding increased as the number of physical flankers increased, even when the observers did not report seeing any of the flankers. In contrast, when flanker awareness was manipulated with AIB, the magnitude of crowding increased with the number of perceived flankers. Our results show that whether crowding is contingent on awareness of the flankers depends on the method used to suppress awareness. In addition, our results imply that the locus of crowding is upstream from the neural locus of IOS and close to or downstream from that of AIB. Neurophysiology and neuroimaging studies jointly implicate mid-to-high level visual processing stages for IOS, while direct evidence regarding the neural locus of AIB is limited. The most consistent interpretation of our empirical findings is to place the neural locus of crowding at an early cortical site, such as V1 or V2. PMID:28549353
Eštočinová, Jana; Lo Gerfo, Emanuele; Della Libera, Chiara; Chelazzi, Leonardo; Santandrea, Elisa
2016-11-01
Visual selective attention (VSA) optimizes perception and behavioral control by enabling efficient selection of relevant information and filtering of distractors. While focusing resources on task-relevant information helps counteract distraction, dedicated filtering mechanisms have recently been demonstrated, allowing neural systems to implement suitable policies for the suppression of potential interference. Limited evidence is presently available concerning the neural underpinnings of these mechanisms, and whether neural circuitry within the visual cortex might play a causal role in their instantiation, a possibility that we directly tested here. In two related experiments, transcranial magnetic stimulation (TMS) was applied over the lateral occipital cortex of healthy humans at different times during the execution of a behavioral task which entailed varying levels of distractor interference and need for attentional engagement. While earlier TMS boosted target selection, stimulation within a restricted time epoch close to (and in the course of) stimulus presentation engendered selective enhancement of distractor suppression, by affecting the ongoing, reactive instantiation of attentional filtering mechanisms required by specific task conditions. The results attest to a causal role of mid-tier ventral visual areas in distractor filtering and offer insights into the mechanisms through which TMS may have affected ongoing neural activity in the stimulated tissue.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
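Comparisons of this kind between brain measurements and DNN layers are commonly carried out with representational similarity analysis: a representational dissimilarity matrix (RDM) is computed for each system and the two RDMs are correlated. The following is a minimal NumPy sketch of that logic on synthetic data; the toy "brain"/"DNN" patterns and all names are our illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle (the unique pairwise dissimilarities)."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of ranks (no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)

# Toy "conditions x units" responses for two systems that share part of
# their representational geometry (e.g., a DNN layer and a brain region).
n_cond = 12
shared = rng.normal(size=(n_cond, 30))
brain = np.hstack([shared, rng.normal(size=(n_cond, 30))])  # shared + idiosyncratic units
dnn = np.hstack([shared, rng.normal(size=(n_cond, 30))])
unrelated = rng.normal(size=(n_cond, 60))

r_match = spearman(upper(rdm(brain)), upper(rdm(dnn)))
r_null = spearman(upper(rdm(brain)), upper(rdm(unrelated)))
print(r_match > r_null)  # shared geometry yields the higher RDM correlation
```

With real data the "brain" patterns would be MEG sensor or fMRI voxel responses per stimulus and the "DNN" patterns layer activations; the comparison itself is unchanged.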
Can responses to basic non-numerical visual features explain neural numerosity responses?
Harvey, Ben M; Dumoulin, Serge O
2017-04-01
Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
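The model-comparison logic (predict responses from numerosity tuning versus from a co-varying non-numerical feature, then ask which predicts the observed responses better) can be illustrated on synthetic data. This is only a sketch: the log-Gaussian tuning form, the total-area feature, and all parameters are illustrative assumptions, not the study's actual pRF pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stimulus set: numerosities 1..7 shown with randomized per-dot sizes, so that
# total dot area (a non-numerical feature) only partially co-varies with number.
numerosity = np.repeat(np.arange(1, 8), 20)
dot_area = rng.uniform(0.5, 1.5, size=numerosity.size)
total_area = numerosity * dot_area

def tuning(n, pref, width):
    """Log-Gaussian numerosity tuning, as assumed in pRF models of parietal maps."""
    return np.exp(-0.5 * ((np.log(n) - np.log(pref)) / width) ** 2)

# Simulated recording site: numerosity-tuned response plus measurement noise.
observed = tuning(numerosity, pref=3.0, width=0.4) + 0.1 * rng.normal(size=numerosity.size)

# Candidate predictors: the numerosity pRF model vs. a monotonic
# total-area model standing in for a low-level visual feature.
pred_numerosity = tuning(numerosity, pref=3.0, width=0.4)
pred_area = total_area

r_num = np.corrcoef(observed, pred_numerosity)[0, 1]
r_area = np.corrcoef(observed, pred_area)[0, 1]
print(r_num > abs(r_area))  # the numerosity model explains the response better
```

In the study proper, candidate response models are fit per recording site and compared by prediction accuracy; the toy comparison above captures only that contrast.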
Krisch, I; Hosticka, B J
2007-01-01
Microsystem technologies offer significant advantages in the development of neural prostheses. Over the last two decades, advances in functionality, complexity, size, weight, and compactness have made it feasible to develop intelligent prostheses that are fully implantable into the human body. Design and development require collaboration among physicians, engineers, and scientists from various disciplines. The retina implant system is one sophisticated example of a prosthesis that bypasses neural defects and enables direct electrical stimulation of nerve cells. This implantable visual microprosthesis can help blind patients return to a more normal course of life. The retina implant is intended for patients suffering from retinitis pigmentosa or macular degeneration. In this contribution, we focus on the epiretinal prosthesis and discuss topics such as system design, data and power transfer, fabrication, packaging, and testing. In detail, the system is based upon an implantable micro electro-stimulator which is powered and controlled via a wireless inductive link. Microelectronic circuits for data encoding and stimulation are assembled on flexible substrates with an integrated electrode array. The implant system is encapsulated using parylene C and silicone rubber. Results from in vivo experiments demonstrate retinotopic activation of the visual cortex.
Dissociation of the Neural Correlates of Visual and Auditory Contextual Encoding
ERIC Educational Resources Information Center
Gottlieb, Lauren J.; Uncapher, Melina R.; Rugg, Michael D.
2010-01-01
The present study contrasted the neural correlates of encoding item-context associations according to whether the contextual information was visual or auditory. Subjects (N = 20) underwent fMRI scanning while studying a series of visually presented pictures, each of which co-occurred with either a visually or an auditorily presented name. The task…
Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".
Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David
2013-01-23
Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.
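"Envelope tracking" here means that cortical activity correlates with the slow amplitude envelope of the attended speech stream more than with that of the ignored stream. A toy simulation of that measure follows; the signals, modulation rates, and mixing weights are invented for illustration (the study used MEG responses to natural continuous speech).

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0                       # sample rate in Hz
t = np.arange(0, 10, 1 / fs)

def envelope(sig, win=10):
    """Crude amplitude envelope: rectify, then moving-average smooth."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(sig), kernel, mode="same")

# Two "speech" streams carrying independent slow amplitude modulations.
env_attended = 1 + 0.5 * np.sin(2 * np.pi * 3 * t)       # ~3 Hz syllable-rate modulation
env_ignored = 1 + 0.5 * np.sin(2 * np.pi * 5 * t + 1.0)
attended = env_attended * rng.normal(size=t.size)
ignored = env_ignored * rng.normal(size=t.size)

# Simulated auditory-cortex signal: tracks the attended envelope more
# strongly than the ignored one, plus measurement noise.
neural = 1.0 * env_attended + 0.3 * env_ignored + 0.5 * rng.normal(size=t.size)

r_att = np.corrcoef(neural, envelope(attended))[0, 1]
r_ign = np.corrcoef(neural, envelope(ignored))[0, 1]
print(r_att > r_ign)             # stronger tracking of the attended stream
```

The paper's claim is then that congruent visual input increases the attended-versus-ignored tracking difference; in this sketch that would correspond to raising the attended weight relative to the ignored one.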
On the role of spatial phase and phase correlation in vision, illusion, and cognition
Gladilin, Evgeny; Eils, Roland
2015-01-01
Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of the evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of “cognition by phase correlation.” PMID:25954190
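Fourier phase correlation, the core technique the authors extend, recovers the relative shift between two patterns from phase alone: the cross-power spectrum is normalized to unit amplitude, and its inverse transform peaks at the displacement. A minimal sketch for integer shifts (the function name and test images are ours):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation taking image b to image a
    from the peak of the Fourier phase-correlation surface."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12        # keep phase only, discard amplitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape                        # wrap shifts into a signed range
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))    # → (5, -3)
```

Because the amplitude spectrum is discarded, the estimate depends only on phase structure, which is why the same machinery can serve both motion detection and structural pattern matching.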
Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas
2015-01-01
Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949
Vision for navigation: What can we learn from ants?
Graham, Paul; Philippides, Andrew
2017-09-01
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Yan, Xiaodan
2010-01-01
The current study investigated the functional connectivity of the primary sensory system with resting-state fMRI and applied this knowledge to the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were used to reveal the functional connectivity patterns. A dissociation was found within the primary sensory system: the olfactory and somatosensory cortices were strongly connected to the amygdala, whereas the visual and auditory cortices were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. This neural architecture inspired the design of dissociated emergent-response and fine-processing systems in autonomous humanoid robots, with separate processing units and a consolidation center to coordinate the two systems. Such a design can help autonomous robots detect and respond quickly to danger, so as to maintain their sustainability and independence.
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
NASA Astrophysics Data System (ADS)
Kobayashi, Takuma; Tagawa, Ayato; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Hatanaka, Yumiko; Tamura, Hideki; Ishikawa, Yasuyuki; Shiosaka, Sadao; Ohta, Jun
2010-11-01
The combination of optical imaging with voltage-sensitive dyes is a powerful tool for studying the spatiotemporal patterns of neural activity and understanding the neural networks of the brain. To visualize the potential status of multiple neurons simultaneously using a compact instrument with high density and a wide range, we present a novel measurement system using an implantable biomedical photonic LSI device with a red absorptive light filter for voltage-sensitive dye imaging (BpLSI-red). The BpLSI-red was developed for sensing fluorescence by the on-chip LSI, which was designed by using complementary metal-oxide-semiconductor (CMOS) technology. A micro-electro-mechanical system (MEMS) microfabrication technique was used to postprocess the CMOS sensor chip; light-emitting diodes (LEDs) were integrated for illumination and to enable long-term cell culture. Using the device, we succeeded in visualizing the membrane potential of 2000-3000 cells and the process of depolarization of pheochromocytoma cells (PC12 cells) and mouse cerebral cortical neurons in a primary culture with cellular resolution. Therefore, our measurement application enables the detection of multiple neural activities simultaneously.
Kriegeskorte, Nikolaus
2015-11-24
Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
The economics of motion perception and invariants of visual sensitivity.
Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael
2007-06-21
Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.
Computational model for perception of objects and motions.
Yang, WenLu; Zhang, LiQing; Ma, LiBo
2008-06-01
Perception of objects and motions in the visual scene is one of the basic problems addressed by the visual system. There exist 'what' and 'where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture; the latter perceives 'where', for example, the velocity and direction of objects' spatial movement. This paper explores brain-like computational architectures for visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents internal neural information. The connections between the first and second layers, called the receptive fields of neurons, are learned adaptively based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as a measure of independence between neural responses and derive the learning algorithm by minimizing the resulting cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which become localized, oriented, and bandpass. The resulting receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we construct the third layer for perception of what and where, as in the superior visual cortex. The proposed model perceives objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the efficiency of the learning algorithm.
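The paper derives its learning rule from a Kullback-Leibler independence measure; as a simpler stand-in, the classic sparse-coding recipe (reconstruction error plus an L1 sparsity penalty, in the spirit of Olshausen and Field) conveys the same idea of learning receptive fields from data. A minimal NumPy sketch on synthetic data; all parameters are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: sparse linear mixtures standing in for natural-image patches.
n_features, n_atoms, n_samples = 16, 24, 500
true_D = rng.normal(size=(n_features, n_atoms))
true_A = rng.normal(size=(n_atoms, n_samples)) * (rng.random((n_atoms, n_samples)) < 0.1)
X = true_D @ true_A + 0.01 * rng.normal(size=(n_features, n_samples))

def ista(D, X, lam=0.1, n_iter=50):
    """Sparse codes for a fixed dictionary D by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A -= D.T @ (D @ A - X) / L           # gradient step on reconstruction error
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
    return A

def cost(D, A, X, lam=0.1):
    return 0.5 * np.sum((X - D @ A) ** 2) + lam * np.sum(np.abs(A))

# Alternate: codes by ISTA, dictionary ("receptive fields") by a gradient step.
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)
history = []
for _ in range(20):
    A = ista(D, X)
    history.append(cost(D, A, X))
    step = 1.0 / (np.linalg.norm(A @ A.T, 2) + 1e-8)
    D -= step * (D @ A - X) @ A.T            # descend the reconstruction error
    D /= np.linalg.norm(D, axis=0)           # keep atoms unit-norm

print(history[0] > history[-1])              # cost falls as receptive fields adapt
```

On whitened natural-image patches, the same loop yields the localized, oriented, bandpass filters the abstract describes; here only the decreasing cost is checked.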
Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg
2012-01-01
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies used discrete visual stimuli; however, the change from closed to open eyes also represents a simple visual stimulus, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes-closed (EC) and eyes-open (EO) states, in order to complement previous studies of GABA concentrations with detail at the receptor level. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO/EC block design, allowing modeling of the haemodynamic response, followed by longer periods of EC and EO to allow measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. The local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
Attention affects visual perceptual processing near the hand.
Cosman, Joshua D; Vecera, Shaun P
2010-09-01
Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.
Brain-Based Devices for Neuromorphic Computer Systems
2013-07-01
and Deco, G. (2012). Effective Visual Working Memory Capacity: An Emergent Effect from the Neural Dynamics in an Attractor Network. PLoS ONE 7, e42719 ... models, apply them to a recognition task, and to demonstrate a working memory. In the course of this work a new analytical method for spiking data was ... 3.4 Spiking Neural Model Simulation of Working Memory ... 3.5 A Novel Method for Analysis
Image Understanding by Image-Seeking Adaptive Networks (ISAN).
1987-08-10
our research on adaptive neural networks in the visual and sensory-motor cortex of cats. We demonstrate that, under certain conditions, plasticity is ... understanding in organisms proceeds directly from adaptively seeking whole images and not via a preliminary analysis of elementary features, followed by object ... empirical research has always been that ultimately any neural system has to serve behavior and that behavior serves survival. Evolutionary selection makes it
Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana
2016-01-01
This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in the ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of the cortical neural synchronization regardless the primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon
2018-07-01
We present a hierarchical neural network model in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization) that respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as by introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).
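The delay-selection mechanism described above can be illustrated with a toy simulation, a generic sketch of spike-timing-dependent plasticity acting across multiple synaptic contacts; all parameter values below are illustrative, not taken from the paper. Repeated pre/post pairings strengthen the contact whose axonal delay makes its spike arrive just before the postsynaptic spike:

```python
import math

import numpy as np

# Toy sketch: one presynaptic neuron contacts a postsynaptic neuron via
# four synaptic contacts with different axonal delays. Classic STDP with
# soft weight bounds selects the contact whose delay best predicts the
# postsynaptic spike time. All constants are illustrative.
delays = np.array([1.0, 3.0, 5.0, 7.0])        # axonal delays (ms)
w = np.full(len(delays), 0.5)                  # initial weights in [0, 1]

def stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Exponential STDP window; dt = t_post - t_arrival (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre-before-post: potentiate
    return -a_minus * math.exp(dt / tau)       # post-before-pre: depress

t_pre, t_post = 0.0, 5.5                       # one pre/post spike pairing
for _ in range(200):                           # repeated pairings
    for i, d in enumerate(delays):
        dw = stdp(t_post - (t_pre + d))
        # soft bounds keep each weight inside [0, 1]
        w[i] += dw * (1 - w[i]) if dw > 0 else dw * w[i]

# The contact whose spike arrives just before the postsynaptic spike
# (the 5 ms delay) ends up strongest; the too-late contact is depressed.
best_delay = delays[int(np.argmax(w))]
```

The soft-bound update makes the weight ordering track how closely each arrival precedes the postsynaptic spike, which is the essence of the delay-selection idea in the abstract.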
Plastic reorganization of neural systems for perception of others in the congenitally blind.
Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I
2017-09-01
Recent evidence suggests that the function of the core system for face perception might extend beyond visual face perception to a broader role in person perception. To critically test this broader role of the core face system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus, showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Chessa, Manuela; Bianchi, Valentina; Zampetti, Massimo; Sabatini, Silvio P; Solari, Fabio
2012-01-01
The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is well suited to implementation on the multi-core architectures of modern graphics cards. We propose design strategies that optimally exploit this parallelism by efficiently mapping the hierarchy of layers and the canonical neural computations onto the GPU. Specifically, the advantages of a cortical map-like representation of the data are exploited. Moreover, a GPU implementation of a novel neural architecture for the computation of binocular disparity from stereo image pairs, based on populations of binocular energy neurons, is presented. The implemented neural model achieves good performance in terms of the reliability of the disparity estimates and near real-time execution speed, thus demonstrating the effectiveness of the devised design strategies. The proposed approach is valid in general, since the neural building blocks we implemented are a common basis for the modeling of visual neural functionalities.
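As a minimal illustration of the neural building block involved, the following sketch implements a 1-D population of binocular energy units (the standard energy model with a position-shift encoding; the filter parameters and stimulus are illustrative choices, not details of the paper's GPU implementation). The unit whose preferred disparity matches the stimulus disparity responds most strongly on average:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-8, 8, 256, endpoint=False)    # 1-D retinal axis
dx = x[1] - x[0]
sigma, f = 1.0, 0.5                            # Gabor envelope width, frequency

def gabor(phase, shift=0.0):
    """Gabor receptive field, optionally shifted along the retinal axis."""
    xs = x - shift
    return np.exp(-xs**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f * xs + phase)

def energy(left, right, d):
    """Binocular energy unit tuned to disparity d (position-shift model)."""
    resp = 0.0
    for ph in (0.0, np.pi / 2):                # quadrature pair of RFs
        s = gabor(ph) @ left + gabor(ph, shift=d) @ right
        resp += s ** 2
    return resp

d_true = 1.0                                   # true stimulus disparity
cands = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # preferred disparities
avg = np.zeros(len(cands))
for _ in range(200):                           # average over random stimuli
    left = rng.standard_normal(len(x))
    right = np.roll(left, round(d_true / dx))  # right eye sees a shifted copy
    avg += np.array([energy(left, right, d) for d in cands])

estimate = cands[int(np.argmax(avg))]          # population peak ~ true disparity
```

Each unit in the population is independent of the others, which is exactly the structure that maps well onto the per-thread parallelism the abstract describes.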
Yuan, Wu-Jie; Dimigen, Olaf; Sommer, Werner; Zhou, Changsong
2013-01-01
Microsaccades during fixation have been suggested to counteract visual fading. Recent experiments have also observed microsaccade-related neural responses in cellular recordings, scalp electroencephalography (EEG), and functional magnetic resonance imaging (fMRI). The underlying mechanism, however, is not yet understood and remains highly debated. It has been proposed that the neural activity of primary visual cortex (V1) is a crucial component for counteracting visual adaptation. In this paper, we use computational modeling to investigate how short-term depression (STD) in thalamocortical synapses might affect the neural responses of V1 in the presence of microsaccades. Our model not only gives a possible synaptic explanation for the role of microsaccades in counteracting visual fading, but also reproduces several features of experimental findings. These modeling results suggest that STD in thalamocortical synapses plays an important role in microsaccade-related neural responses, and the model may be useful for further investigation of the behavioral properties and functional roles of microsaccades. PMID:23630494
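The short-term depression at the heart of this account can be sketched with a minimal Tsodyks-Markram-style resource model (a generic textbook formulation; the rates and time constants below are illustrative, not fitted values from the paper). Sustained retinal drive during fixation depletes synaptic resources so the response adapts; an image shift onto un-adapted synapses, as caused by a microsaccade, would transiently restore it:

```python
# Minimal short-term-depression sketch (Tsodyks-Markram style).
dt = 1.0          # simulation time step (ms)
tau_rec = 200.0   # resource recovery time constant (ms)
U = 0.4           # fraction of available resources released per spike

x = 1.0           # available synaptic resources (starts fully recovered)
responses = []
for t in range(600):
    if t % 20 == 0:                   # steady 50 Hz presynaptic drive
        responses.append(U * x)       # response ~ released resources
        x -= U * x                    # depletion on each spike
    x += dt * (1.0 - x) / tau_rec     # exponential recovery toward 1

fresh, adapted = responses[0], responses[-1]
# A microsaccade that moves the image onto fresh synapses (x near 1)
# would momentarily restore the response from `adapted` toward `fresh`.
```

The gap between `fresh` and `adapted` is the depressed steady state that, in this account, a microsaccade periodically escapes.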
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
2017-03-20
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders
Peiker, Ina; Schneider, Till R.; Milne, Elizabeth; Schöttle, Daniel; Vogeley, Kai; Münchau, Alexander; Schunke, Odette; Siegel, Markus; Engel, Andreas K.; David, Nicole
2015-01-01
Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: Gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether, in individuals with ASD, neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more steeply with motion coherence in ASD than in controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma-band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD. PMID:26147342
Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco
2017-01-01
The recent "deep learning revolution" in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. PMID:28377709
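To make the contrast concrete, here is a minimal mean-field sketch of the contrastive-divergence (CD-1) update used to train an RBM, applied to a single binary pattern. This is a generic illustration (no bias terms, illustrative layer sizes and learning rate), not the authors' simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 16, 8, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))     # visible-hidden weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0):
    """One mean-field contrastive-divergence (CD-1) weight gradient."""
    h0 = sigmoid(v0 @ W)          # hidden activation given the data
    v1 = sigmoid(W @ h0)          # one-step reconstruction of the visibles
    h1 = sigmoid(v1 @ W)          # hidden activation given the reconstruction
    return np.outer(v0, h0) - np.outer(v1, h1)     # positive - negative phase

def recon_error(v):
    return float(np.mean((v - sigmoid(W @ sigmoid(v @ W))) ** 2))

v = (rng.random(n_vis) < 0.5).astype(float)        # one binary training pattern
err_before = recon_error(v)
for _ in range(200):
    W += lr * cd1_update(v)
err_after = recon_error(v)                         # reconstruction improves
```

Note the asymmetry with an autoencoder: the same bidirectional weights `W` are used both to infer hidden states and to generate reconstructions, and learning contrasts data-driven and model-driven statistics rather than backpropagating a supervised error.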
Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei
2014-01-01
Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974
Lateralization of the human mirror neuron system.
Aziz-Zadeh, Lisa; Koski, Lisa; Zaidel, Eran; Mazziotta, John; Iacoboni, Marco
2006-03-15
A cortical network consisting of the inferior frontal, rostral inferior parietal, and posterior superior temporal cortices has been implicated in representing actions in the primate brain and is critical to imitation in humans. This neural circuitry may be an evolutionary precursor of neural systems associated with language. However, language is predominantly lateralized to the left hemisphere, whereas the degree of lateralization of the imitation circuitry in humans is unclear. We conducted a functional magnetic resonance imaging study of imitation of finger movements with lateralized stimuli and responses. During imitation, activity in the inferior frontal and rostral inferior parietal cortex, although fairly bilateral, was stronger in the hemisphere ipsilateral to the visual stimulus and response hand. This ipsilateral pattern is at variance with the typical contralateral activity of primary visual and motor areas. Reliably increased signal in the right superior temporal sulcus (STS) was observed for both left-sided and right-sided imitation tasks, although subthreshold activity was also observed in the left STS. Overall, the data indicate that visual and motor components of the human mirror system are not left-lateralized. The left hemisphere superiority for language, then, must have been favored by other types of language precursors, perhaps auditory or multimodal action representations.
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
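The integration rule described above can be sketched as follows (a generic ideal-observer formulation under Gaussian noise; the display size, signal strength, and the two reliability levels are illustrative, not the paper's exact stimulus parameters). Each item's evidence is weighted by its own reliability before being combined nonlinearly across the display:

```python
import numpy as np

rng = np.random.default_rng(1)

def loglik_ratio(x, sigma, mu=2.0):
    """Log-likelihood ratio that one item is the target (mean mu) rather
    than a distractor (mean 0), given measurement x with noise sigma.
    The 1/sigma^2 factor is the reliability weighting."""
    return (x * mu - 0.5 * mu**2) / sigma**2

def report_present(x, sigmas):
    # Marginalize over which item is the target (uniform prior over items),
    # then compare to the target-absent hypothesis; threshold 1.0 assumes
    # a flat 0.5 prior on target presence. This is the nonlinear rule:
    # exponentiate per-item evidence, then average across items.
    return np.mean(np.exp(loglik_ratio(x, sigmas))) > 1.0

n_items, n_trials, correct = 4, 400, 0
for present in [True, False] * (n_trials // 2):
    sigmas = rng.choice([0.3, 1.5], size=n_items)  # reliability varies per item
    x = rng.normal(0.0, sigmas)                    # distractor measurements
    if present:
        x[rng.integers(n_items)] += 2.0            # one item carries the target
    correct += report_present(x, sigmas) == present

accuracy = correct / n_trials
```

Because the per-item evidence is divided by the item's own noise variance, unreliable items are automatically discounted on each trial, which is the behavior the study found humans to approximate.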
Bells, Sonya; Lefebvre, Jérémie; Prescott, Steven A; Dockstader, Colleen; Bouffet, Eric; Skocic, Jovanka; Laughlin, Suzanne; Mabbott, Donald J
2017-08-23
Cognition is compromised by white matter (WM) injury but the neurophysiological alterations linking them remain unclear. We hypothesized that reduced neural synchronization caused by disruption of neural signal propagation is involved. To test this, we evaluated group differences in: diffusion tensor WM microstructure measures within the optic radiations, primary visual area (V1), and cuneus; neural phase synchrony to a visual attention cue during a visual-motor task; and reaction time to a response cue during the same task between 26 pediatric patients (17/9: male/female) treated with cranial radiation for a brain tumor (12.67 ± 2.76 years), and 26 healthy children (16/10: male/female; 12.01 ± 3.9 years). We corroborated our findings using a corticocortical computational model representing perturbed signal conduction from myelin. Patients show delayed reaction time, WM compromise, and reduced phase synchrony during visual attention compared with healthy children. Notably, using partial least-squares path modeling we found that WM insult within the optic radiations, V1, and cuneus is a strong predictor of the slower reaction times via disruption of neural synchrony in visual cortex. Observed changes in synchronization were reproduced in a computational model of WM injury. These findings provide new evidence linking cognition with WM via the reliance of neural synchronization on propagation of neural signals. SIGNIFICANCE STATEMENT By comparing brain tumor patients to healthy children, we establish that changes in the microstructure of the optic radiations and neural synchrony during visual attention predict reaction time. Furthermore, by testing the directionality of these links through statistical modeling and verifying our findings with computational modeling, we infer a causal relationship, namely that changes in white matter microstructure impact cognition in part by disturbing the ability of neural assemblies to synchronize.
Together, our human imaging data and computer simulations show a fundamental connection between WM microstructure and neural synchronization that is critical for cognitive processing. Copyright © 2017 the authors.
2016-01-01
Although much is known about the regenerative capacity of retinal ganglion cells, very significant barriers remain in our ability to restore visual function following traumatic injury or disease-induced degeneration. Here we summarize our current understanding of the factors regulating axon guidance and target engagement in regenerating axons, and review the state of the field of neural regeneration, focusing on the visual system and highlighting studies using other model systems that can inform analysis of visual system regeneration. This overview is motivated by a Society for Neuroscience Satellite meeting, “Reconnecting Neurons in the Visual System,” held in October 2015 sponsored by the National Eye Institute as part of their “Audacious Goals Initiative” and co-organized by Carol Mason (Columbia University) and Michael Crair (Yale University). The collective wisdom of the conference participants pointed to important gaps in our knowledge and barriers to progress in promoting the restoration of visual system function. This article is thus a summary of our existing understanding of visual system regeneration and provides a blueprint for future progress in the field. PMID:27798125
Qualitative similarities in the visual short-term memory of pigeons and people.
Gibson, Brett; Wasserman, Edward; Luck, Steven J
2011-10-01
Visual short-term memory plays a key role in guiding behavior, and individual differences in visual short-term memory capacity are strongly predictive of higher cognitive abilities. To provide a broader evolutionary context for understanding this memory system, we directly compared the behavior of pigeons and humans on a change detection task. Although pigeons had a lower storage capacity and a higher lapse rate than humans, both species stored multiple items in short-term memory and conformed to the same basic performance model. Thus, despite their very different evolutionary histories and neural architectures, pigeons and humans have functionally similar visual short-term memory systems, suggesting that the functional properties of visual short-term memory are subject to similar selective pressures across these distant species.
Visualizing deep neural network by alternately image blurring and deblurring.
Wang, Feng; Liu, Haijun; Cheng, Jian
2018-01-01
Visualization of trained deep neural networks has drawn massive public attention recently. One visualization approach is to optimize images that maximize the activation of specific neurons. However, directly maximizing the activation leads to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two mutually inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting details in the images, which are usually filtered out by previous methods. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand neural networks by utilizing the knowledge obtained from the visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.
Memory-related brain lateralisation in birds and humans.
Moorman, Sanne; Nicol, Alister U
2015-03-01
Visual imprinting in chicks and song learning in songbirds are prominent model systems for the study of the neural mechanisms of memory. In both systems, neural lateralisation has been found to be involved in memory formation. Although many processes in the human brain are lateralised (spatial memory and musical processing involve mostly right-hemisphere dominance, whilst language is mostly left-hemisphere dominant), it is unclear what the function of lateralisation is. It might enhance brain capacity, make processing more efficient, or prevent the occurrence of conflicting signals. In both avian paradigms we find memory-related lateralisation. We will discuss avian lateralisation findings and propose that birds provide a strong model for studying the neural mechanisms of memory-related lateralisation. Copyright © 2014. Published by Elsevier Ltd.
Two critical periods in early visual cortex during figure-ground segregation.
Wokke, Martijn E; Sligte, Ilja G; Steven Scholte, H; Lamme, Victor A F
2012-11-01
The ability to distinguish a figure from its background is crucial for visual perception. To date, it remains unresolved where and how in the visual system different stages of figure-ground segregation emerge. Neural correlates of figure border detection have consistently been found in early visual cortex (V1/V2). However, areas V1/V2 have also been frequently associated with later stages of figure-ground segregation (such as border ownership or surface segregation). To causally link activity in early visual cortex to different stages of figure-ground segregation, we briefly disrupted activity in areas V1/V2 at various moments in time using transcranial magnetic stimulation (TMS). Prior to stimulation we presented stimuli that made it possible to differentiate between figure border detection and surface segregation. We concurrently recorded electroencephalographic (EEG) signals to examine how neural correlates of figure-ground segregation were affected by TMS. Results show that disruption of V1/V2 in an early time window (96-119 msec) affected detection of figure stimuli and affected neural correlates of figure border detection, border ownership, and surface segregation. TMS applied in a relatively late time window (236-259 msec) selectively deteriorated performance associated with surface segregation. We conclude that areas V1/V2 are not only essential in an early stage of figure-ground segregation when figure borders are detected, but subsequently causally contribute to more sophisticated stages of figure-ground segregation such as surface segregation. PMID:23170239
Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’
Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David
2013-01-01
Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
Kaiser, Daniel; Stein, Timo; Peelen, Marius V.
2014-01-01
In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception. PMID:25024190
Cholinergic enhancement of visual attention and neural oscillations in the human brain.
Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon
2012-03-06
Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.
Classification Objects, Ideal Observers & Generative Models
ERIC Educational Resources Information Center
Olman, Cheryl; Kersten, Daniel
2004-01-01
A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual…
Visual Based Retrieval Systems and Web Mining--Introduction.
ERIC Educational Resources Information Center
Iyengar, S. S.
2001-01-01
Briefly discusses Web mining and image retrieval techniques, and then presents a summary of articles in this special issue. Articles focus on Web content mining, artificial neural networks as tools for image retrieval, content-based image retrieval systems, and personalizing the Web browsing experience using media agents. (AEF)
Cocaine, Appetitive Memory and Neural Connectivity
Ray, Suchismita
2013-01-01
This review examines existing cognitive experimental and brain imaging research related to cocaine addiction. In section 1, previous studies that have examined cognitive processes, such as implicit and explicit memory processes in cocaine users are reported. Next, in section 2, brain imaging studies are reported that have used chronic users of cocaine as study participants. In section 3, several conclusions are drawn. They are: (a) in cognitive experimental literature, no study has examined both implicit and explicit memory processes involving cocaine related visual information in the same cocaine user, (b) neural mechanisms underlying implicit and explicit memory processes for cocaine-related visual cues have not been directly investigated in cocaine users in the imaging literature, and (c) none of the previous imaging studies has examined connectivity between the memory system and craving system in the brain of chronic users of cocaine. Finally, future directions in the field of cocaine addiction are suggested. PMID:25009766
Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf
Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao
2016-01-01
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
Eye evolution at high resolution: the neuron as a unit of homology.
Erclik, Ted; Hartenstein, Volker; McInnes, Roderick R; Lipshitz, Howard D
2009-08-01
Based on differences in morphology, photoreceptor-type usage and lens composition it has been proposed that complex eyes have evolved independently many times. The remarkable observation that different eye types rely on a conserved network of genes (including Pax6/eyeless) for their formation has led to the revised proposal that disparate complex eye types have evolved from a shared and simpler prototype. Did this ancestral eye already contain the neural circuitry required for image processing? And what were the evolutionary events that led to the formation of complex visual systems, such as those found in vertebrates and insects? The recent identification of unexpected cell-type homologies between neurons in the vertebrate and Drosophila visual systems has led to two proposed models for the evolution of complex visual systems from a simple prototype. The first, as an extension of the finding that the neurons of the vertebrate retina share homologies with both insect (rhabdomeric) and vertebrate (ciliary) photoreceptor cell types, suggests that the vertebrate retina is a composite structure, made up of neurons that have evolved from two spatially separate ancestral photoreceptor populations. The second model, based largely on the conserved role for the Vsx homeobox genes in photoreceptor-target neuron development, suggests that the last common ancestor of vertebrates and flies already possessed a relatively sophisticated visual system that contained a mixture of rhabdomeric and ciliary photoreceptors as well as their first- and second-order target neurons. The vertebrate retina and fly visual system would have subsequently evolved by elaborating on this ancestral neural circuit. Here we present evidence for these two cell-type homology-based models and discuss their implications.
Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.
Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel
2015-08-15
When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration, and they may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.
Neuromorphic VLSI vision system for real-time texture segregation.
Shimonomura, Kazuhiro; Yagi, Tetsuya
2008-10-01
The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field-programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina to produce Gabor-like receptive fields tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining simple and complex cells.
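The pipeline described above, filtering a texture with two orthogonally oriented receptive fields and combining the rectified outputs to label regions by orientation, can be sketched in software. The 3x3 kernels and the winner-take-all labeling below are illustrative assumptions for a minimal demonstration, not the chip's actual filters:

```python
def convolve(img, kernel):
    """Naive 'same' 2-D convolution on lists of lists (zero padding)."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for j in range(kh):
                for i in range(kw):
                    yy, xx = y + j - oy, x + i - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * kernel[j][i]
            out[y][x] = s
    return out

# Two orthogonal orientation-tuned kernels standing in for the
# "simple cell" receptive fields (illustrative, not the chip's circuits).
VERT = [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]]
HORZ = [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]]

def segregate(img):
    """Label each pixel +1 (vertical texture) or -1 (horizontal texture)
    by comparing the rectified responses of the two oriented filters."""
    ev, eh = convolve(img, VERT), convolve(img, HORZ)
    return [[1 if abs(v) >= abs(h) else -1 for v, h in zip(rv, rh)]
            for rv, rh in zip(ev, eh)]

# Synthetic texture: left half vertical stripes, right half horizontal stripes.
img = [[(x % 2 if x < 8 else y % 2) for x in range(16)] for y in range(8)]
labels = segregate(img)
```

Away from the texture boundary, the left region is labeled +1 and the right region -1, which is the essence of combining two orthogonal orientation maps for segregation.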
Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander
2018-01-01
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final-state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, promote a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723
Clarke, Aaron M.; Herzog, Michael H.; Francis, Gregory
2014-01-01
Experimentalists tend to classify models of visual perception as being either local or global, and involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, but that instead, global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, and two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as being local or global and feedforward or feedback is less important than the identification of a system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena. PMID:25374554
Yue, Shigang; Rind, F Claire
2006-05-01
The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation, for collision detection against complex backgrounds. The isolated excitation caused by background detail is filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real-time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds.
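The "grouped excitation" idea above can be illustrated with a minimal sketch. This is my own simplification, not the paper's exact network: an excited cell survives only if at least one of its 8 neighbours is also excited, so the contiguous expanded edge of a looming object passes through while isolated background excitation is suppressed:

```python
def group_filter(exc, min_neighbours=1):
    """Keep an excited cell only if at least `min_neighbours` of its
    8 neighbours are also excited; isolated excitation is filtered out.
    (Illustrative threshold rule, not the paper's exact mechanism.)"""
    h, w = len(exc), len(exc[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not exc[y][x]:
                continue
            n = sum(exc[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))
                    if (yy, xx) != (y, x))
            out[y][x] = 1 if n >= min_neighbours else 0
    return out

# A contiguous (grouped) edge survives; the lone excited cell does not.
exc = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 1, 0]]
filtered = group_filter(exc)
```

Running this keeps the three-cell edge in the second row intact while zeroing the isolated cell in the last row, mirroring how background detail is removed before the collision-detection stage.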
What is the optimal architecture for visual information routing?
Wolfrum, Philipp; von der Malsburg, Christoph
2007-12-01
Analyzing the design of networks for visual information routing is an underconstrained problem due to insufficient anatomical and physiological data. We propose here optimality criteria for the design of routing networks. For a very general architecture, we derive the number of routing layers and the fanout that minimize the required neural circuitry. The optimal fanout l is independent of network size, while the number k of layers scales logarithmically (with a prefactor below 1) with the number n of visual resolution units to be routed independently. The results are found to agree with data from the primate visual system.
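The size-independence of the optimal fanout can be seen in a toy cost model (my assumption for illustration, not necessarily the paper's exact criterion): with fanout l, routing n units needs k = log(n)/log(l) layers, and circuitry scales as n * l * k. Minimising l/log(l) gives an optimum at l = e, i.e. an optimal integer fanout of 3, regardless of n:

```python
import math

def circuitry(n, l):
    """Toy cost model (assumption): n units, k = log_l(n) layers,
    l switches per unit per layer."""
    k = math.log(n) / math.log(l)   # number of routing layers
    return n * l * k                # total switch count

def best_fanout(n, fanouts=range(2, 10)):
    """Integer fanout minimising the modeled circuitry."""
    return min(fanouts, key=lambda l: circuitry(n, l))

# The minimising fanout does not change with network size,
# because the n * log(n) factor is common to all fanouts.
assert best_fanout(10**3) == best_fanout(10**6) == 3
```

This mirrors the abstract's claim: the fanout optimum is a property of the per-layer trade-off alone, while only the layer count k grows (logarithmically) with n.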
An Attractive Reelin Gradient Establishes Synaptic Lamination in the Vertebrate Visual System.
Di Donato, Vincenzo; De Santis, Flavia; Albadri, Shahad; Auer, Thomas Oliver; Duroure, Karine; Charpentier, Marine; Concordet, Jean-Paul; Gebhardt, Christoph; Del Bene, Filippo
2018-03-07
A conserved organizational and functional principle of neural networks is the segregation of axon-dendritic synaptic connections into laminae. Here we report that targeting of synaptic laminae by retinal ganglion cell (RGC) arbors in the vertebrate visual system is regulated by a signaling system relying on target-derived Reelin and VLDLR/Dab1a on the projecting neurons. Furthermore, we find that Reelin is distributed as a gradient on the target tissue and stabilized by heparan sulfate proteoglycans (HSPGs) in the extracellular matrix (ECM). Through genetic manipulations, we show that this Reelin gradient is important for laminar targeting and that it is attractive for RGC axons. Finally, we suggest a comprehensive model of synaptic lamina formation in which attractive Reelin counter-balances repulsive Slit1, thereby guiding RGC axons toward single synaptic laminae. We establish a mechanism that may represent a general principle for neural network assembly in vertebrate species and across different brain areas. Copyright © 2018 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Gegenfurtner, Andreas; Kok, Ellen M.; van Geel, Koos; de Bruin, Anique B. H.; Sorger, Bettina
2017-01-01
Functional neuroimaging is a useful approach to study the neural correlates of visual perceptual expertise. The purpose of this paper is to review the functional-neuroimaging methods that have been implemented in previous research in this context. First, we will discuss research questions typically addressed in visual expertise research. Second,…
ERIC Educational Resources Information Center
Brooks, Brian E.; Cooper, Eric E.
2006-01-01
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…
The artist emerges: visual art learning alters neural structure and function.
Schlegel, Alexander; Alexander, Prescott; Fogelson, Sergey V; Li, Xueting; Lu, Zhengang; Kohler, Peter J; Riley, Enrico; Tse, Peter U; Meng, Ming
2015-01-15
How does the brain mediate visual artistic creativity? Here we studied behavioral and neural changes in drawing and painting students compared to students who did not study art. We investigated three aspects of cognition vital to many visual artists: creative cognition, perception, and perception-to-action. We found that the art students became more creative via the reorganization of prefrontal white matter but did not find any significant changes in perceptual ability or related neural activity in the art students relative to the control group. Moreover, the art students improved in their ability to sketch human figures from observation, and multivariate patterns of cortical and cerebellar activity evoked by this drawing task became increasingly separable between art and non-art students. Our findings suggest that the emergence of visual artistic skills is supported by plasticity in neural pathways that enable creative cognition and mediate perceptuomotor integration. Copyright © 2014 Elsevier Inc. All rights reserved.
Gestalt isomorphism and the primacy of subjective conscious experience: a Gestalt Bubble model.
Lehar, Steven
2003-08-01
A serious crisis is identified in theories of neurocomputation, marked by a persistent disparity between the phenomenological or experiential account of visual perception and the neurophysiological level of description of the visual system. In particular, conventional concepts of neural processing offer no explanation for the holistic global aspects of perception identified by Gestalt theory. The problem is paradigmatic and can be traced to contemporary concepts of the functional role of the neural cell, known as the Neuron Doctrine. In the absence of an alternative neurophysiologically plausible model, I propose a perceptual modeling approach, to model the percept as experienced subjectively, rather than modeling the objective neurophysiological state of the visual system that supposedly subserves that experience. A Gestalt Bubble model is presented to demonstrate how the elusive Gestalt principles of emergence, reification, and invariance can be expressed in a quantitative model of the subjective experience of visual consciousness. That model in turn reveals a unique computational strategy underlying visual processing, which is unlike any algorithm devised by man, and certainly unlike the atomistic feed-forward model of neurocomputation offered by the Neuron Doctrine paradigm. The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation. The common objections to this "picture-in-the-head" concept of perceptual representation are shown to be ill founded.
Visual gravity cues in the interpretation of biological movements: neural correlates in humans.
Maffei, Vincenzo; Indovina, Iole; Macaluso, Emiliano; Ivanenko, Yuri P; A Orban, Guy; Lacquaniti, Francesco
2015-01-01
Our visual system takes into account the effects of Earth's gravity to interpret biological motion (BM), but the neural substrates of this process remain unclear. Here we measured functional magnetic resonance imaging (fMRI) signals while participants viewed intact or scrambled stick-figure animations of walking, running, hopping, and skipping recorded at normal or reduced gravity. We found that regions sensitive to BM configuration in the occipito-temporal cortex (OTC) were more active for reduced than normal gravity, but only with intact stimuli. Effective connectivity analysis suggests that predictive coding of gravity effects underlies BM interpretation. This process might be implemented by a family of snapshot neurons involved in action monitoring. Copyright © 2014 Elsevier Inc. All rights reserved.
Neural mechanisms of oculomotor abnormalities in the infantile strabismus syndrome.
Walton, Mark M G; Pallus, Adam; Fleuriet, Jérome; Mustari, Michael J; Tarczy-Hornoch, Kristina
2017-07-01
Infantile strabismus is characterized by numerous visual and oculomotor abnormalities. Recently nonhuman primate models of infantile strabismus have been established, with characteristics that closely match those observed in human patients. This has made it possible to study the neural basis for visual and oculomotor symptoms in infantile strabismus. In this review, we consider the available evidence for neural abnormalities in structures related to oculomotor pathways ranging from visual cortex to oculomotor nuclei. These studies provide compelling evidence that a disturbance of binocular vision during a sensitive period early in life, whatever the cause, results in a cascade of abnormalities through numerous brain areas involved in visual functions and eye movements. Copyright © 2017 the American Physiological Society.
Johari, Karim; Behroozmand, Roozbeh
2017-05-01
The predictive coding model suggests that neural processing of sensory information is facilitated for temporally predictable stimuli. This study investigated how temporal processing of visually presented sensory cues modulates movement reaction time and neural activities in speech and hand motor systems. Event-related potentials (ERPs) were recorded in 13 subjects while they were visually cued to prepare to produce a steady vocalization of a vowel sound or press a button in a randomized order, and to initiate the cued movement following the onset of a go signal on the screen. The experiment was conducted in two counterbalanced blocks in which the time interval between the visual cue and the go signal was temporally predictable (fixed delay at 1000 ms) or unpredictable (variable between 1000 and 2000 ms). Results of the behavioral response analysis indicated that movement reaction time was significantly decreased for temporally predictable stimuli in both speech and hand modalities. We identified premotor ERP activities with a left-lateralized parietal distribution for hand and a frontocentral distribution for speech that were significantly suppressed in response to temporally predictable compared with unpredictable stimuli. The premotor ERPs were elicited approximately 100 ms before movement onset and were significantly correlated with speech and hand motor reaction times only in response to temporally predictable stimuli. These findings suggest that the motor system establishes a predictive code to facilitate movement in response to temporally predictable sensory stimuli. Our data suggest that the premotor ERP activities are robust neurophysiological biomarkers of such predictive coding mechanisms. These findings provide novel insights into the temporal processing mechanisms of speech and hand motor systems.
Tallon-Baudry, Catherine; Campana, Florence; Park, Hyeong-Dong; Babo-Rebelo, Mariana
2018-05-01
Why should a scientist whose aim is to unravel the neural mechanisms of perception consider brain-body interactions seriously? Brain-body interactions have traditionally been associated with emotion, effort, or stress, but not with the "cold" processes of perception and attention. Here, we review recent experimental evidence suggesting a different picture: the neural monitoring of bodily state, and in particular the neural monitoring of the heart, affects visual perception. The impact of spontaneous fluctuations of neural responses to heartbeats on visual detection is as large as the impact of explicit manipulations of spatial attention in perceptual tasks. However, we propose that the neural monitoring of visceral inputs plays a specific role in conscious perception, distinct from the role of attention. The neural monitoring of organs such as the heart or the gut would generate a subject-centered reference frame, from which the first-person perspective inherent to conscious perception can develop. In this view, conscious perception results from the integration of visual content with first-person perspective. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Selecting and perceiving multiple visual objects
Xu, Yaoda; Chun, Marvin M.
2010-01-01
To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encode their details (object identification). We describe the involvement of the inferior intra-parietal sulcus (IPS) in object individuation and the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition such as working memory. PMID:19269882
Suzuki, Takumi; Sato, Makoto
2017-11-15
Diversification of neuronal types is key to establishing functional variations in neural circuits. The first critical step to generate neuronal diversity is to organize the compartmental domains of developing brains into spatially distinct neural progenitor pools. Neural progenitors in each pool then generate a unique set of diverse neurons through specific spatiotemporal specification processes. In this review article, we focus on an additional mechanism, 'inter-progenitor pool wiring', that further expands the diversity of neural circuits. After diverse types of neurons are generated in one progenitor pool, a fraction of these neurons start migrating toward a remote brain region containing neurons that originate from another progenitor pool. Finally, neurons of different origins are intermingled and eventually form complex but precise neural circuits. The developing cerebral cortex of mammalian brains is one of the best examples of inter-progenitor pool wiring. However, Drosophila visual system development has revealed similar mechanisms in invertebrate brains, suggesting that inter-progenitor pool wiring is an evolutionarily conserved strategy that expands neural circuit diversity. Here, we will discuss how inter-progenitor pool wiring is accomplished in mammalian and fly brain systems. Copyright © 2017 Elsevier Inc. All rights reserved.
Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.
Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T
2012-01-02
Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity, non-invasively, in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, a few studies found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversially debated, and its origin has mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows, ranging from 80 ms pre-stimulus to 300 ms post-stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye-blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate, based on our data, that TMS exerts its pre-stimulus effect via generation of a neural state that interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.
Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren
2012-10-01
Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived even though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Despite the neural activity in response to ICs in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception, and how do they interact to achieve coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that the neural activity was stronger for stimuli with more contour completion than for stimuli with more contour representation in V1 and V2, the reverse of the pattern in the LOC. When inspecting the change in neural activity across the visual pathway, the activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and a possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.
High-Level Vision: Top-Down Processing in Neurally Inspired Architectures
2008-02-01
[Fragmented excerpt, including figure-caption residue.] Visual input from the lateral geniculate enters the visual buffer (shunting subsystem). Early vision involves the dorsal lateral geniculate nucleus of the thalamus (LGNd), the superior colliculus of the midbrain, and cortical regions V1 through V4. Abbreviations: fMRI: functional magnetic resonance imaging; FOA: focus of attention; IMPER: IMagery and PERception model; IS: information shunting system; LGNd: dorsal lateral geniculate nucleus.
Wood, Daniel K; Gu, Chao; Corneil, Brian D; Gribble, Paul L; Goodale, Melvyn A
2015-08-01
We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus-locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low-frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
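The variance-reduction-by-phase-resetting finding above can be illustrated with a toy simulation (my own sketch, not the study's analysis pipeline): trials carry ongoing oscillations with random phases, and a stimulus-locked reset aligns those phases, so the across-trial variance of the signal collapses after the reset:

```python
import math
import random

random.seed(0)
freq, trials, t = 8.0, 200, 0.1   # 8 Hz oscillation, probed at t = 100 ms
phases = [random.uniform(0, 2 * math.pi) for _ in range(trials)]

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# Before the stimulus: each trial keeps its own random ongoing phase,
# so the signal at time t varies widely across trials.
var_pre = variance([math.sin(2 * math.pi * freq * t + p) for p in phases])

# After a stimulus-locked phase reset: every trial's phase is realigned
# to zero, so the signal at time t is identical across trials.
var_post = variance([math.sin(2 * math.pi * freq * t) for _ in range(trials)])
```

With random phases the across-trial variance of a unit sinusoid is close to 0.5; after the reset it drops to zero, which is the mechanism proposed for the reduced variability of muscle activity following the visual response.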
Harris, Joseph A; Wu, Chien-Te; Woldorff, Marty G
2011-06-07
It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. However, the masking also appeared to produce a strong attenuation of earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher level visual system pathways specific to object category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.
Artificial neural network does better spatiotemporal compressive sampling
NASA Astrophysics Data System (ADS)
Lee, Soo-Young; Hsu, Charles; Szu, Harold
2012-06-01
Spatiotemporal sparseness is generated naturally by the human visual system, modeled here with an artificial neural network implementation of associative memory. Sparseness, in the compressive-sensing sense, is nothing more and nothing less than information concentration. To concentrate information, one can use spatial correlation, the spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). For higher-dimensional spatiotemporal information concentration, however, such mathematics cannot be as flexible as a living human sensory system, presumably for survival reasons. The rest of the story is given in the paper.
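As a hedged illustration of the compressive-sampling idea discussed above (a standard textbook construction, not the authors' method), the following sketch recovers a sparse signal from far fewer random measurements than signal samples, using orthogonal matching pursuit:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the selected support by least squares
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 48, 4                              # length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = Phi @ x_true                                  # m << n compressive measurements
x_hat = omp(Phi, y, k)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With 48 random measurements of a 4-sparse, 128-sample signal, the recovery is essentially exact, which is the "information concentration" the abstract alludes to.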
Digital implementation of a neural network for imaging
NASA Astrophysics Data System (ADS)
Wood, Richard; McGlashan, Alex; Yatulis, Jay; Mascher, Peter; Bruce, Ian
2012-10-01
This paper outlines the design and testing of a digital imaging system that utilizes an artificial neural network with unsupervised and supervised learning to convert streaming (real-time) input image space into parameter space. The primary objective of this work is to investigate the effectiveness of using a neural network to significantly reduce the information density of streaming images so that objects can be readily identified by a limited set of primary parameters and act as an enhanced human-machine interface (HMI). Many applications are envisioned including use in biomedical imaging, anomaly detection and as an assistive device for the visually impaired. A digital circuit was designed and tested using a Field Programmable Gate Array (FPGA) and an off-the-shelf digital camera. Our results indicate that the networks can be readily trained when subject to limited sets of objects such as the alphabet. We can also separate limited object sets with rotational and positional invariance. The results also show that limited visual fields form with only local connectivity.
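A minimal sketch of the unsupervised-learning stage such a system might use (the network type, initialization, and all parameters here are assumptions, not the paper's FPGA design): competitive learning collapses a stream of noisy images onto a few prototype units, i.e., from image space into a small parameter space.

```python
import numpy as np

def competitive_learning(data, n_units=3, lr=0.1, epochs=20):
    """Unsupervised competitive learning: each unit's weight vector drifts
    toward the inputs it wins, clustering image space into parameter space."""
    # initialize each unit from an early sample (here, one per class)
    w = data[:n_units].copy()
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(((w - x) ** 2).sum(axis=1))
            w[winner] += lr * (x - w[winner])   # move winning unit toward input
    return w

rng = np.random.default_rng(5)
# three "letter-like" 4x4 binary prototypes plus noisy variants of each
protos = (rng.random((3, 16)) > 0.5).astype(float)
data = np.concatenate([protos + 0.1 * rng.standard_normal((3, 16))
                       for _ in range(30)])
w = competitive_learning(data, n_units=3)
# each learned unit should sit near one of the underlying prototypes
errs = [min(np.linalg.norm(w[k] - p) for p in protos) for k in range(3)]
```

The index of the winning unit is the low-density "parameter" the abstract describes extracting from each frame.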
Marginalization in neural circuits with divisive normalization
Beck, J.M.; Latham, P.E.; Pouget, A.
2011-01-01
A wide range of computations performed by the nervous system involves a type of probabilistic inference known as marginalization. This computation comes up in seemingly unrelated tasks, including causal reasoning, odor recognition, motor control, visual tracking, coordinate transformations, visual search, decision making, and object recognition, to name just a few. The question we address here is: how could neural circuits implement such marginalizations? We show that when spike trains exhibit a particular type of statistics – associated with constant Fano factors and gain-invariant tuning curves, as is often reported in vivo – some of the more common marginalizations can be achieved with networks that implement a quadratic nonlinearity and divisive normalization, the latter being a type of nonlinear lateral inhibition that has been widely reported in neural circuits. Previous studies have implicated divisive normalization in contrast gain control and attentional modulation. Our results raise the possibility that it is involved in yet another, highly critical, computation: near optimal marginalization in a remarkably wide range of tasks. PMID:22031877
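The normalization step the authors build on can be sketched as follows (a toy illustration, not the paper's full marginalization network): a quadratic nonlinearity followed by divisive normalization yields a population pattern that is invariant to input gain.

```python
import numpy as np

def divisive_normalization(u, sigma=1.0):
    """Quadratic nonlinearity followed by divisive normalization:
    r_i = u_i**2 / (sigma + sum_j u_j**2)."""
    q = u ** 2
    return q / (sigma + q.sum())

# Gaussian population tuning to a stimulus at s = 0
prefs = np.linspace(-5, 5, 41)
tuning = np.exp(-0.5 * prefs ** 2)

r_low = divisive_normalization(2.0 * tuning)    # low-contrast input
r_high = divisive_normalization(20.0 * tuning)  # high-contrast input

# overall gain differs, but the normalized population *pattern* does not
pattern_low = r_low / r_low.sum()
pattern_high = r_high / r_high.sum()
```

This gain-invariant tuning is the statistical regime (with constant Fano factors) under which the paper shows such circuits can implement near-optimal marginalization.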
ERIC Educational Resources Information Center
Wang, Lan-Ting; Lee, Kun-Chou
2014-01-01
The vision plays an important role in educational technologies because it can produce and communicate quite important functions in teaching and learning. In this paper, learners' preference for the visual complexity on small screens of mobile computers is studied by neural networks. The visual complexity in this study is divided into five…
McDowell, Jennifer E.; Dyckman, Kara A.; Austin, Benjamin; Clementz, Brett A.
2008-01-01
This review provides a summary of the contributions made by human functional neuroimaging studies to the understanding of neural correlates of saccadic control. The generation of simple visually-guided saccades (redirections of gaze to a visual stimulus or prosaccades) and more complex volitional saccades require similar basic neural circuitry with additional neural regions supporting requisite higher level processes. The saccadic system has been studied extensively in non-human primates (e.g. single unit recordings) and humans (e.g. lesions and neuroimaging). Considerable knowledge of this system’s functional neuroanatomy makes it useful for investigating models of cognitive control. The network involved in prosaccade generation (by definition exogenously-driven) includes subcortical (striatum, thalamus, superior colliculus, and cerebellar vermis) and cortical structures (primary visual, extrastriate, and parietal cortices, and frontal and supplementary eye fields). Activation in these regions is also observed during endogenously-driven voluntary saccades (e.g. antisaccades, ocular motor delayed response or memory saccades, predictive tracking tasks and anticipatory saccades, and saccade sequencing), all of which require complex cognitive processes like inhibition and working memory. These additional requirements are supported by changes in neural activity in basic saccade circuitry and by recruitment of additional neural regions (such as prefrontal and anterior cingulate cortices). Activity in visual cortex is modulated as a function of task demands and may predict the type of saccade to be generated, perhaps via top-down control mechanisms. Neuroimaging studies suggest two foci of activation within FEF - medial and lateral - which may correspond to volitional and reflexive demands, respectively. 
Future research on saccade control could usefully (i) delineate important anatomical subdivisions that underlie functional differences, (ii) evaluate functional connectivity of anatomical regions supporting saccade generation using methods such as ICA and structural equation modeling, (iii) investigate how context affects behavior and brain activity, and (iv) use multi-modal neuroimaging to maximize spatial and temporal resolution. PMID:18835656
Mosaic and Concerted Evolution in the Visual System of Birds
Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N.; Moore, Bret A.; Fernández-Juricic, Esteban; Corfield, Jeremy R.; Krilow, Justin M.; Kolominsky, Jeffrey; Wylie, Douglas R.
2014-01-01
Two main models have been proposed to explain how the relative size of neural structures varies through evolution. In the mosaic evolution model, individual brain structures vary in size independently of each other, whereas in the concerted evolution model developmental constraints result in different parts of the brain varying in size in a coordinated manner. Several studies have shown variation of the relative size of individual nuclei in the vertebrate brain, but it is currently not known if nuclei belonging to the same functional pathway vary independently of each other or in a concerted manner. The visual system of birds offers an ideal opportunity to specifically test which of the two models apply to an entire sensory pathway. Here, we examine the relative size of 9 different visual nuclei across 98 species of birds. This includes data on interspecific variation in the cytoarchitecture and relative size of the isthmal nuclei, which has not been previously reported. We also use a combination of statistical analyses, phylogenetically corrected principal component analysis and evolutionary rates of change on the absolute and relative size of the nine nuclei, to test if visual nuclei evolved in a concerted or mosaic manner. Our results strongly indicate a combination of mosaic and concerted evolution (in the relative size of nine nuclei) within the avian visual system. Specifically, the relative size of the isthmal nuclei and parts of the tectofugal pathway covary across species in a concerted fashion, whereas the relative volume of the other visual nuclei measured vary independently of one another, such as that predicted by the mosaic model. Our results suggest the covariation of different neural structures depends not only on the functional connectivity of each nucleus, but also on the diversity of afferents and efferents of each nucleus. PMID:24621573
Chen, Guang; Rasch, Malte J.; Wang, Ran; Zhang, Xiao-hui
2015-01-01
Neural oscillatory activities have been shown to play important roles in neural information processing and the shaping of circuit connections during development. However, it remains unknown whether and how specific neural oscillations emerge during a postnatal critical period (CP), in which neuronal connections are most substantially modified by neural activity and experience. By recording local field potentials (LFPs) and single unit activity in developing primary visual cortex (V1) of head-fixed awake mice, we here demonstrate an emergence of characteristic oscillatory activities during the CP. From the pre-CP to CP, the peak frequency of spontaneous fast oscillatory activities shifts from the beta band (15–35 Hz) to the gamma band (40–70 Hz), accompanied by a decrease of cross-frequency coupling (CFC) and broadband spike-field coherence (SFC). Moreover, visual stimulation induced a large increase of beta-band activity but a reduction of gamma-band activity specifically from the CP onwards. Dark rearing of animals from the birth delayed this emergence of oscillatory activities during the CP, suggesting its dependence on early visual experience. These findings suggest that the characteristic neuronal oscillatory activities emerged specifically during the CP may represent as neural activity trait markers for the experience-dependent maturation of developing visual cortical circuits. PMID:26648548
The Puzzle of Visual Development: Behavior and Neural Limits.
Kiorpes, Lynne
2016-11-09
The development of visual function takes place over many months or years in primate infants. Visual sensitivity is very poor near birth and improves over different time courses for different visual functions. The neural mechanisms that underlie these processes are not well understood despite many decades of research. The puzzle arises because research into the factors that limit visual function in infants has found surprisingly mature neural organization and adult-like receptive field properties in very young infants. The high degree of visual plasticity that has been documented during the sensitive period in young children and animals leaves the brain vulnerable to abnormal visual experience. Abnormal visual experience during the sensitive period can lead to amblyopia, a developmental disorder of vision affecting ∼3% of children. This review provides a historical perspective on research into visual development and the disorder amblyopia. The mismatch between the status of the primary visual cortex and visual behavior, both during visual development and in amblyopia, is discussed, and several potential resolutions are considered. It seems likely that extrastriate visual areas further along the visual pathways may set important limits on visual function and show greater vulnerability to abnormal visual experience. Analyses based on multiunit, population activity may provide useful representations of the information being fed forward from primary visual cortex to extrastriate processing areas and to the motor output. Copyright © 2016 the authors 0270-6474/16/3611384-10$15.00/0.
Embodied learning of a generative neural model for biological motion perception and inference
Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.
2015-01-01
Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215
Cortical connective field estimates from resting state fMRI activity.
Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V; Dumoulin, Serge O; Renken, Remco; Curčić-Blake, Branislava; Cornelissen, Frans W
2014-01-01
One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 → V2 and V1 → V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that, despite some variability in CF estimates between RS scans, neural properties such as CF maps and CF size can be derived from resting state data.
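The CF idea of predicting one area's voxel activity as a Gaussian-weighted aggregate of another area's activity can be sketched on synthetic data (illustrative only; real CF fitting also estimates the CF width and uses distances on the cortical surface):

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic "V1": 50 source voxels on a 1-D cortical strip, 200 time points
n_src, n_t = 50, 200
positions = np.linspace(0, 10, n_src)
v1 = rng.standard_normal((n_src, n_t))

def cf_predict(center, width, sources, pos):
    """Connective field prediction: Gaussian-weighted sum of source timecourses."""
    w = np.exp(-0.5 * ((pos - center) / width) ** 2)
    return w @ sources

# ground-truth "V2" voxel driven by a CF centred at 6.0 with width 1.0
target = cf_predict(6.0, 1.0, v1, positions) + 0.5 * rng.standard_normal(n_t)

# grid search over candidate CF centres, scoring by timecourse correlation
centers = np.linspace(0, 10, 101)
scores = [np.corrcoef(cf_predict(c, 1.0, v1, positions), target)[0, 1]
          for c in centers]
best_center = centers[int(np.argmax(scores))]
```

The recovered centre sits on the simulated V1 strip, which is how CF estimates, combined with pRF maps, can be read back into visual space.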
Clavagnier, Simon; Dumoulin, Serge O; Hess, Robert F
2015-11-04
The neural basis of amblyopia is a matter of debate. The following possibilities have been suggested: loss of foveal cells, reduced cortical magnification, loss of spatial resolution of foveal cells, and topographical disarray in the cellular map. To resolve this we undertook a population receptive field (pRF) functional magnetic resonance imaging analysis in the central field in humans with moderate-to-severe amblyopia. We measured the relationship between averaged pRF size and retinal eccentricity in retinotopic visual areas. Results showed that cortical magnification is normal in the foveal field of strabismic amblyopes. However, the pRF sizes are enlarged for the amblyopic eye. We speculate that the pRF enlargement reflects loss of cellular resolution or an increased cellular positional disarray within the representation of the amblyopic eye. The neural basis of amblyopia, a visual deficit affecting 3% of the human population, remains a matter of debate. We undertook the first population receptive field functional magnetic resonance imaging analysis in participants with amblyopia and compared the projections from the amblyopic and fellow normal eye in the visual cortex. The projection from the amblyopic eye was found to have a normal cortical magnification factor, enlarged population receptive field sizes, and topographic disorganization in all early visual areas. This is consistent with an explanation of amblyopia as an immature system with a normal complement of cells whose spatial resolution is reduced and whose topographical map is disordered. This bears upon a number of competing theories for the psychophysical defect and affects future treatment therapies. Copyright © 2015 the authors 0270-6474/15/3514740-16$15.00/0.
Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu
2014-04-23
How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene.
1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene.
2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex.
3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.
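The role of the vanishing point can be made concrete with a small pinhole-projection sketch (illustrative, not the authors' plenoptic formalism): under perspective projection, parallel 3-D lines with direction d all converge to the image point (dx/dz, dy/dz).

```python
import numpy as np

def project(points, f=1.0):
    """Pinhole perspective projection onto the image plane z = f."""
    pts = np.asarray(points, float)
    return f * pts[:, :2] / pts[:, 2:3]

d = np.array([1.0, 0.5, 2.0])            # common 3-D direction of parallel lines
p0s = [np.array([0.0, 0.0, 4.0]),        # two distinct lines sharing direction d
       np.array([1.0, -1.0, 5.0])]
t = np.linspace(0, 50, 20)

def image_line(p0):
    """Project sample points p0 + t*d of one 3-D line into the image."""
    pts = p0[None, :] + t[:, None] * d[None, :]
    return project(pts)

# as t grows, every parallel line converges to the same image point
vp_pred = d[:2] / d[2]                   # analytic vanishing point (f = 1)
ends = np.array([image_line(p0)[-1] for p0 in p0s])
```

Both projected lines approach vp_pred regardless of their 3-D offsets, which is why the vanishing point carries depth-direction information about the scene.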
Neural network face recognition using wavelets
NASA Astrophysics Data System (ADS)
Karunaratne, Passant V.; Jouny, Ismail I.
1997-04-01
The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research is involved in the study of neural networks and wavelet image processing techniques in the application of human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, be able to recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints of the system concern the characteristics of the images being processed. The system should be able to carry out effective recognition of the human faces irrespective of the individual's facial expression, presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
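A minimal sketch of the wavelet-preprocessing stage (using the Haar/db1 wavelet, the simplest Daubechies wavelet, to stay self-contained, and a nearest-neighbour matcher as a stand-in for the paper's back-propagation network; the images and names are made up):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform (Daubechies db1); returns the
    low-pass (LL) quarter used as a compressed feature image."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # average adjacent rows
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # average adjacent columns
    return ll

def recognize(probe, database):
    """Nearest-neighbour match in wavelet feature space (a simple stand-in
    for the paper's back-propagation recognition network)."""
    f = haar2d(probe).ravel()
    dists = {name: np.linalg.norm(f - haar2d(ref).ravel())
             for name, ref in database.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(3)
faces = {name: rng.random((16, 16)) for name in ["alice", "bob", "carol"]}
noisy_probe = faces["bob"] + 0.05 * rng.standard_normal((16, 16))
match = recognize(noisy_probe, faces)
```

The LL quarter compresses each image fourfold while averaging out pixel noise, which is why recognition on wavelet features tolerates the degraded probe.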
Separating neural activity associated with emotion and implied motion: An fMRI study.
Kolesar, Tiffany A; Kornelsen, Jennifer; Smith, Stephen D
2017-02-01
Previous research provides evidence for an emo-motoric neural network allowing emotion to modulate activity in regions of the nervous system related to movement. However, recent research suggests that these results may be due to the movement depicted in the stimuli. The purpose of the current study was to differentiate the unique neural activity of emotion and implied motion using functional MRI. Thirteen healthy participants viewed 4 sets of images: (a) negative stimuli implying movement, (b) negative stimuli not implying movement, (c) neutral stimuli implying movement, and (d) neutral stimuli not implying movement. A main effect for implied motion was found, primarily in regions associated with multimodal integration (bilateral insula and cingulate), and visual areas that process motion (bilateral middle temporal gyrus). A main effect for emotion was found primarily in occipital and parietal regions, indicating that emotion enhances visual perception. Surprisingly, emotion also activated the left precentral gyrus, a motor region. These results demonstrate that emotion elicits activity above and beyond that evoked by the perception of implied movement, but that the neural representations of these characteristics overlap. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Dynamic plasticity in coupled avian midbrain maps
NASA Astrophysics Data System (ADS)
Atwal, Gurinder Singh
2004-12-01
Internal mapping of the external environment is carried out using the receptive fields of topographic neurons in the brain, and in a normal barn owl the aural and visual subcortical maps are aligned from early experiences. However, instantaneous misalignment of the aural and visual stimuli has been observed to result in adaptive behavior, manifested by functional and anatomical changes of the auditory processing system. Using methods of information theory and statistical mechanics a model of the adaptive dynamics of the aural receptive field is presented and analyzed. The dynamics is determined by maximizing the mutual information between the neural output and the weighted sensory neural inputs, admixed with noise, subject to biophysical constraints. The reduced costs of neural rewiring, as in the case of young barn owls, reveal two qualitatively different types of receptive field adaptation depending on the magnitude of the audiovisual misalignment. By letting the misalignment increase with time, it is shown that the ability to adapt can be increased even when neural rewiring costs are high, in agreement with recent experimental reports of the increased plasticity of the auditory space map in adult barn owls due to incremental learning. Finally, a critical speed of misalignment is identified, demarcating the crossover from adaptive to nonadaptive behavior.
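The trade-off between information gain and rewiring cost can be caricatured with a one-weight linear-Gaussian channel (a toy model, not the paper's population dynamics; all parameters are illustrative): gradient ascent on mutual information minus a quadratic rewiring penalty adapts far less when the penalty is high.

```python
import numpy as np

def mutual_info(w, sig_s=1.0, sig_n=0.5):
    """I(output; input) in nats for a linear-Gaussian channel y = w*s + noise."""
    return 0.5 * np.log(1.0 + (w * sig_s) ** 2 / sig_n ** 2)

def objective(w, w0, lam):
    """Information gain penalised by a quadratic neural-rewiring cost."""
    return mutual_info(w) - lam * (w - w0) ** 2

def adapt(w0, lam, lr=0.05, steps=2000):
    """Gradient ascent (numerical gradient) on the penalised objective."""
    w = w0
    for _ in range(steps):
        eps = 1e-5
        grad = (objective(w + eps, w0, lam)
                - objective(w - eps, w0, lam)) / (2 * eps)
        w += lr * grad
    return w

w0 = 0.2
w_cheap = adapt(w0, lam=0.01)   # low rewiring cost: large adaptive change
w_costly = adapt(w0, lam=2.0)   # high rewiring cost: weight barely moves
```

High rewiring cost pins the weight near its initial value, mirroring the paper's finding that adaptation is limited when neural rewiring is expensive.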
Understanding human visual systems and its impact on our intelligent instruments
NASA Astrophysics Data System (ADS)
Strojnik Scholl, Marija; Páez, Gonzalo; Scholl, Michelle K.
2013-09-01
We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences onto flourishing fields of neural processing, parallel processing, and associative memory in optical sciences and computing. Then we examine how the intensive efforts in mapping the human brain have been influenced by concepts in computer sciences, control theory, and electronic circuits. We discuss two neural paths that employ the input from the vision sense to determine the navigational options and object recognition. They are the ventral temporal pathway for object recognition (what?) and the dorsal parietal pathway for navigation (where?), respectively. We describe the reflexive and conscious decision centers in cerebral cortex involved with visual attention and gaze control. Interestingly, these require a return path through the midbrain for ocular muscle control. We find that cognitive psychologists currently study the human brain employing low-spatial-resolution fMRI with temporal response on the order of a second. In recent years, life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and inferior parietal lobe, constituting the fronto-parietal attention network. We note that ethical and experiential learning impacts our conscious decisions.
Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor
2017-01-01
Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization, that effectively switches the neural controllers from one phase of the action to the next. 
Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.
Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor
2017-01-01
Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next.
Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp. PMID:28303100
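The time-continuous population dynamics that Dynamic Field Theory posits are typically written as an Amari-type neural field equation. As a minimal one-dimensional sketch of that idea (all parameter values below are illustrative, not taken from the paper), a field receiving two localized inputs forms a self-stabilized supra-threshold peak at the most strongly stimulated location, which is the selection instability the abstract refers to:

```python
import numpy as np

def simulate_field(stim, steps=300, dt=1.0, tau=10.0, h=-5.0,
                   c_exc=1.0, sigma=3.0, g_inh=0.2):
    """Euler-integrate a 1-D Amari dynamic neural field:
       tau * du/dt = -u + h + stim + K @ f(u) - g_inh * sum(f(u)),
    with sigmoidal rate f and a Gaussian excitatory kernel K."""
    n = len(stim)
    x = np.arange(n)
    K = c_exc * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    u = np.full(n, float(h))                  # start at the resting level h
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))          # firing-rate nonlinearity
        u += dt / tau * (-u + h + stim + K @ f - g_inh * f.sum())
    return u

# two stimulus bumps compete; a peak forms at the stronger one
stim = np.zeros(61)
stim[15], stim[45] = 6.0, 5.5
u = simulate_field(stim)
```

Local excitation stabilizes the peak once it forms, while the global inhibition term keeps distant field sites below threshold, so the peak's location can be read out as a decision.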
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models.
Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.
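The hybrid idea can be caricatured with a connectome-coupled rate model in which a few nodes receive task input while the rest of the network supplies recurrent background drive. This is only a rough illustration, not the LSNM/TVB equations; the function name, parameters, and toy connectivity matrix are all invented for the sketch:

```python
import numpy as np

def simulate_hybrid(C, task_idx, task_input, steps=500, dt=0.1, tau=1.0, g=0.5):
    """Rate-model sketch of a hybrid simulation: every connectome node follows
    tau * dx/dt = -x + tanh(g * C @ x + I), where the external input I is
    nonzero only at the nodes hosting the task-specific module."""
    n = C.shape[0]
    x = np.zeros(n)
    I = np.zeros(n)
    I[task_idx] = task_input
    for _ in range(steps):
        x += dt / tau * (-x + np.tanh(g * C @ x + I))
    return x

C = 0.1 * np.ones((6, 6))        # toy "connectome": weak uniform coupling
np.fill_diagonal(C, 0.0)
x = simulate_hybrid(C, task_idx=[0, 1], task_input=1.0)
```

At steady state the driven "task" nodes settle at high activity while the remaining nodes carry only the weak recurrent echo of that activity, mirroring how the embedded task modules interact reciprocally with the rest of the connectome.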
Visual Aversive Learning Compromises Sensory Discrimination.
Shalev, Lee; Paz, Rony; Avidan, Galia
2018-03-14
Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. 
We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors 0270-6474/18/382766-14$15.00/0.
ERIC Educational Resources Information Center
Barca, Laura; Cornelissen, Piers; Simpson, Michael; Urooj, Uzma; Woods, Will; Ellis, Andrew W.
2011-01-01
Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the…
Optimization of a GCaMP calcium indicator for neural activity imaging.
Akerboom, Jasper; Chen, Tsai-Wen; Wardill, Trevor J; Tian, Lin; Marvin, Jonathan S; Mutlu, Sevinç; Calderón, Nicole Carreras; Esposti, Federico; Borghuis, Bart G; Sun, Xiaonan Richard; Gordus, Andrew; Orger, Michael B; Portugues, Ruben; Engert, Florian; Macklin, John J; Filosa, Alessandro; Aggarwal, Aman; Kerr, Rex A; Takagi, Ryousuke; Kracun, Sebastian; Shigetomi, Eiji; Khakh, Baljit S; Baier, Herwig; Lagnado, Leon; Wang, Samuel S-H; Bargmann, Cornelia I; Kimmel, Bruce E; Jayaraman, Vivek; Svoboda, Karel; Kim, Douglas S; Schreiter, Eric R; Looger, Loren L
2012-10-03
Genetically encoded calcium indicators (GECIs) are powerful tools for systems neuroscience. Recent efforts in protein engineering have significantly increased the performance of GECIs. The state-of-the-art single-wavelength GECI, GCaMP3, has been deployed in a number of model organisms and can reliably detect three or more action potentials in short bursts in several systems in vivo. Through protein structure determination, targeted mutagenesis, high-throughput screening, and a battery of in vitro assays, we have increased the dynamic range of GCaMP3 by severalfold, creating a family of "GCaMP5" sensors. We tested GCaMP5s in several systems: cultured neurons and astrocytes, mouse retina, and in vivo in Caenorhabditis elegans chemosensory neurons, Drosophila larval neuromuscular junction and adult antennal lobe, zebrafish retina and tectum, and mouse visual cortex. Signal-to-noise ratio was improved by at least 2- to 3-fold. In the visual cortex, two GCaMP5 variants detected twice as many visual stimulus-responsive cells as GCaMP3. By combining in vivo imaging with electrophysiology we show that GCaMP5 fluorescence provides a more reliable measure of neuronal activity than its predecessor GCaMP3. GCaMP5 allows more sensitive detection of neural activity in vivo and may find widespread applications for cellular imaging in general.
Sensitive periods for the functional specialization of the neural system for human face processing.
Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide
2013-10-15
The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.
Graf, Heiko; Metzger, Coraline D; Walter, Martin; Abler, Birgit
2016-01-06
Investigating the effects of serotonergic antidepressants on neural correlates of visual erotic stimulation revealed decreased reactivity within the dopaminergic reward network along with decreased subjective sexual functioning compared with placebo. However, a global dampening of the reward system under serotonergic drugs is not intuitive considering clinical observations of their beneficial effects in the treatment of depression. Particularly, learning signals as coded in prediction error processing within the dopaminergic reward system can be assumed to be rather enhanced as antidepressant drugs have been demonstrated to facilitate the efficacy of psychotherapeutic interventions relying on learning processes. Within the same study sample, we now explored the effects of serotonergic and dopaminergic/noradrenergic antidepressants on prediction error signals compared with placebo by functional MRI. A total of 17 healthy male participants (mean age: 25.4 years) were investigated under the administration of paroxetine, bupropion and placebo for 7 days each within a randomized, double-blind, within-subject cross-over design. During functional MRI, we used an established monetary incentive task to explore neural prediction error signals within the bilateral nucleus accumbens as region of interest within the dopaminergic reward system. In contrast to diminished neural activations and subjective sexual functioning under the serotonergic agent paroxetine under visual erotic stimulation, we revealed unaffected or even enhanced neural prediction error processing within the nucleus accumbens under this antidepressant along with unaffected behavioural processing. Our study provides evidence that serotonergic antidepressants facilitate prediction error signalling and may support suggestions of beneficial effects of these agents on reinforced learning as an essential element in behavioural psychotherapy.
Multiple mechanisms of consciousness: the neural correlates of emotional awareness.
Amting, Jayna M; Greening, Steven G; Mitchell, Derek G V
2010-07-28
Emotional stimuli, including facial expressions, are thought to gain rapid and privileged access to processing resources in the brain. Despite this access, we are conscious of only a fraction of the myriad of emotion-related cues we face everyday. It remains unclear, therefore, what the relationship is between activity in neural regions associated with emotional representation and the phenomenological experience of emotional awareness. We used functional magnetic resonance imaging and binocular rivalry to delineate the neural correlates of awareness of conflicting emotional expressions in humans. Behaviorally, fearful faces were significantly more likely to be perceived than disgusted or neutral faces. Functionally, increased activity was observed in regions associated with facial expression processing, including the amygdala and fusiform gyrus during emotional awareness. In contrast, awareness of neutral faces and suppression of fearful faces were associated with increased activity in dorsolateral prefrontal and inferior parietal cortices. The amygdala showed increased functional connectivity with ventral visual system regions during fear awareness and increased connectivity with perigenual prefrontal cortex (pgPFC; Brodmann's area 32/10) when fear was suppressed. Despite being prioritized for awareness, emotional items were associated with reduced activity in areas considered critical for consciousness. Contributions to consciousness from bottom-up and top-down neural regions may be additive, such that increased activity in specialized regions within the extended ventral visual system may reduce demands on a frontoparietal system important for awareness. The possibility is raised that interactions between pgPFC and the amygdala, previously implicated in extinction, may also influence whether or not an emotional stimulus is accessible to consciousness.
Marino, Alexandria C.; Mazer, James A.
2016-01-01
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820
Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua
2015-01-01
Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of the three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cell tuned to different speeds and orientations in time for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider the characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
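The spatiotemporal Gabor filters mentioned above can be sketched as a spatial Gabor whose grating drifts over the frames of a short clip; a unit built this way responds strongly to motion at its preferred speed and direction and weakly to the opposite drift. The parameter values below are illustrative, not those of the paper:

```python
import numpy as np

def spatiotemporal_gabor(size=15, frames=7, theta=0.0, speed=1.0,
                         wavelength=6.0, sigma=3.0, sigma_t=2.0):
    """Build a 3-D (t, y, x) Gabor filter whose spatial grating drifts at
    `speed` pixels/frame along the direction given by orientation `theta`."""
    r, tr = size // 2, frames // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # axis of drift
    env_s = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # spatial envelope
    filt = np.empty((frames, size, size))
    for i, t in enumerate(range(-tr, tr + 1)):
        env_t = np.exp(-t**2 / (2 * sigma_t**2))      # temporal envelope
        filt[i] = env_s * env_t * np.cos(2 * np.pi * (xr - speed * t) / wavelength)
    return filt - filt.mean()                         # zero-mean kernel

def v1_response(clip, filt):
    """Half-wave rectified response of one V1-like unit centered on a clip."""
    return max(float((clip * filt).sum()), 0.0)

# drifting gratings: one matching the filter's preferred drift, one opposite
y, x = np.mgrid[-7:8, -7:8]
pref = np.stack([np.cos(2 * np.pi * (x - t) / 6.0) for t in range(-3, 4)])
anti = np.stack([np.cos(2 * np.pi * (x + t) / 6.0) for t in range(-3, 4)])
filt = spatiotemporal_gabor()
```

A bank of such filters over orientations and speeds, followed by rectification and surround suppression, is the kind of front end the model describes for extracting motion energy from video.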
Tradespace Exploration for the Engineering of Resilient Systems
2015-05-01
world scenarios. The types of tools within the SAE set include visualization, decision analysis, and M&S, so it is difficult to categorize this toolset... overpopulated, or questionable. ERS Tradespace Workshop Create predictive models using multiple techniques (e.g., regression, Kriging, neural nets
Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.
Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F
2003-04-15
When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.
The Orphan Nuclear Receptor TLX/NR2E1 in Neural Stem Cells and Diseases.
Wang, Tao; Xiong, Jian-Qiong
2016-02-01
The human TLX gene encodes an orphan nuclear receptor predominantly expressed in the central nervous system. Tailess and Tlx, the TLX homologues in Drosophila and mouse, play essential roles in body-pattern formation and neurogenesis during early embryogenesis and perform crucial functions in maintaining stemness and controlling the differentiation of adult neural stem cells in the central nervous system, especially the visual system. Multiple target genes and signaling pathways are regulated by TLX and its homologues in specific tissues during various developmental stages. This review aims to summarize previous studies including many recent updates from different aspects concerning TLX and its homologues in Drosophila and mouse.
NASA Astrophysics Data System (ADS)
Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin
2005-04-01
A comprehensive understanding of human emotion processing requires consideration of both the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotional recognition as well as to follow the time sequence with millisecond-range resolution. Gender differences in activation upon visual stimuli from the International Affective Picture System (IAPS) were examined. Hemodynamic and electrophysiological responses were measured in the same subjects. Both fMRI and ERP were employed in an event-related design. fMRI data were obtained with a 3.0 T Siemens Magnetom whole-body MRI scanner. 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but source localization and timing are limited by the ill-posed 'inverse' problem. In this study, we investigate the ERP source reconstruction problem using fMRI constraints. We chose ICA as a pre-processing step of ERP source reconstruction to exclude artifacts and provide a prior estimate of the number of dipoles. The results indicate that males and females show differences in neural mechanisms during emotional visual stimulation.
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. 
Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.
Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng
2017-08-30
While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.
NASA Astrophysics Data System (ADS)
Clawson, Wesley Patrick
Previous studies, both theoretical and experimental, of network level dynamics in the cerebral cortex show evidence for a statistical phenomenon called criticality, a phenomenon originally studied in the context of phase transitions in physical systems that is associated with favorable information processing in the context of the brain. The focus of this thesis is to expand upon past results with new experimentation and modeling to show a relationship between criticality and the ability to detect and discriminate sensory input. A line of theoretical work predicts maximal sensory discrimination as a functional benefit of criticality, which can be characterized using the mutual information between sensory input (visual stimulus) and neural response. The primary finding of our experiments in the visual cortex in turtles and neuronal network modeling confirms this theoretical prediction. We show that sensory discrimination is maximized when visual cortex operates near criticality. In addition to presenting this primary finding in detail, this thesis will also address our preliminary results on change-point detection in experimentally measured cortical dynamics.
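The mutual-information measure of sensory discrimination used in work like this can be computed directly from a joint count table of stimuli and discretized neural responses. A minimal sketch (the toy count tables are invented for illustration):

```python
import numpy as np

def mutual_information(counts):
    """Mutual information I(S;R) in bits from a joint count table
    (rows: stimuli, columns: discretized neural responses)."""
    p = counts / counts.sum()                 # joint distribution p(s, r)
    ps = p.sum(axis=1, keepdims=True)         # marginal p(s)
    pr = p.sum(axis=0, keepdims=True)         # marginal p(r)
    nz = p > 0                                # skip zero cells (0 log 0 = 0)
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# perfectly discriminable responses carry log2(2) = 1 bit
perfect = np.array([[50, 0], [0, 50]])
# indiscriminable responses carry 0 bits
flat = np.array([[25, 25], [25, 25]])
print(mutual_information(perfect))   # → 1.0
print(mutual_information(flat))      # → 0.0
```

Maximal discrimination at criticality then corresponds to this quantity peaking when the network's dynamics are tuned near the critical point.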
Fear Processing in Dental Phobia during Crossmodal Symptom Provocation: An fMRI Study
Maslowski, Nina Isabel; Wittchen, Hans-Ulrich; Lueken, Ulrike
2014-01-01
While previous studies successfully identified the core neural substrates of the animal subtype of specific phobia, only limited and inconsistent research is available for dental phobia. These findings might partly relate to the fact that, typically, visual stimuli were employed. The current study aimed to investigate the influence of stimulus modality on neural fear processing in dental phobia. Thirteen dental phobics (DP) and thirteen healthy controls (HC) attended a block-design functional magnetic resonance imaging (fMRI) symptom provocation paradigm encompassing both visual and auditory stimuli. Drill sounds and matched neutral sinus tones served as auditory stimuli and dentist scenes and matched neutral videos as visual stimuli. Group comparisons showed increased activation in the insula, anterior cingulate cortex, orbitofrontal cortex, and thalamus in DP compared to HC during auditory but not visual stimulation. On the contrary, no differential autonomic reactions were observed in DP. Present results are largely comparable to brain areas identified in animal phobia, but also point towards a potential downregulation of autonomic outflow by neural fear circuits in this disorder. Findings enlarge our knowledge about neural correlates of dental phobia and may help to understand the neural underpinnings of the clinical and physiological characteristics of the disorder. PMID:24738049
Real-time data acquisition and control system for the measurement of motor and neural data
Bryant, Christopher L.; Gandhi, Neeraj J.
2013-01-01
This paper outlines a powerful, yet flexible real-time data acquisition and control system for use in the triggering and measurement of both analog and digital events. Built using the LabVIEW development architecture (version 7.1) and freely available, this system provides precisely timed auditory and visual stimuli to a subject while recording analog data and timestamps of neural activity retrieved from a window discriminator. The system utilizes the most recent real-time (RT) technology in order to provide not only a guaranteed data acquisition rate of 1 kHz, but a much more difficult to achieve guaranteed system response time of 1 ms. The system interface is Windows-based and easy to use, providing a host of configurable options for end-user customization. PMID:15698659
Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M
2017-03-01
We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
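The trace learning rule described above can be sketched in a toy competitive layer: the winner is chosen by the current input, but weights move toward a decaying temporal trace of recent activity, so successive views of the same face become bound onto the same output neuron. Everything below is a hedged illustration, not the paper's model; as a simplification to keep the toy run deterministic, the trace here also biases the competition, and the initial weights are chosen by hand:

```python
import numpy as np

def train_trace(sequences, W, eta=0.6, alpha=0.1, epochs=10):
    """Competitive layer trained with a trace rule: Hebbian updates are
    gated by a temporal memory trace of recent winner activity."""
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for seq in sequences:                    # one face rotating through its views
            trace = np.zeros(W.shape[0])
            for x in seq:
                score = W @ x + trace            # current input + temporal continuity
                act = np.zeros(W.shape[0])
                act[np.argmax(score)] = 1.0      # winner-take-all competition
                trace = (1 - eta) * act + eta * trace
                W += alpha * trace[:, None] * x[None, :]   # Hebbian update on the trace
                W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W

def winner(W, x):
    return int(np.argmax(W @ x))

# two "faces", each seen as two orthogonal views presented in temporal sequence
e = np.eye(4)
seq_a, seq_b = [e[0], e[1]], [e[2], e[3]]
W0 = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])           # illustrative initial weights
W = train_trace([seq_a, seq_b], W0.copy())
```

After training, one output neuron responds to both views of the first face and the other neuron to both views of the second, i.e., each neuron has become view-invariant for its face purely through the temporal statistics of presentation, with no supervisory label.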
Visual gravitational motion and the vestibular system in humans
Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka
2013-01-01
The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.
ERIC Educational Resources Information Center
van Gog, Tamara; Paas, Fred; Marcus, Nadine; Ayres, Paul; Sweller, John
2009-01-01
Learning by observing and imitating others has long been recognized as constituting a powerful learning strategy for humans. Recent findings from neuroscience research, more specifically on the mirror neuron system, begin to provide insight into the neural bases of learning by observation and imitation. These findings are discussed here, along…
Persichetti, Andrew S; Aguirre, Geoffrey K; Thompson-Schill, Sharon L
2015-05-01
A central concern in the study of learning and decision-making is the identification of neural signals associated with the values of choice alternatives. An important factor in understanding the neural correlates of value is the representation of the object itself, separate from the act of choosing. Is it the case that the representation of an object within visual areas will change if it is associated with a particular value? We used fMRI adaptation to measure the neural similarity of a set of novel objects before and after participants learned to associate monetary values with the objects. We used a range of both positive and negative values to allow us to distinguish effects of behavioral salience (i.e., large vs. small values) from effects of valence (i.e., positive vs. negative values). During the scanning session, participants made a perceptual judgment unrelated to value. Crucially, the similarity of the visual features of any pair of objects did not predict the similarity of their value, so we could distinguish adaptation effects due to each dimension of similarity. Within early visual areas, we found that value similarity modulated the neural response to the objects after training. These results show that an abstract dimension, in this case, monetary value, modulates neural response to an object in visual areas of the brain even when attention is diverted.
Explaining neural signals in human visual cortex with an associative learning model.
Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias
2012-08-01
"Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy contains one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
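A toy sketch of the two-population account described above: one signal tracks the top-down expectation, another the surprise (prediction error), and the measured population response is their weighted sum. The weights `w_exp` and `w_sur` are illustrative assumptions, not parameters of the cited model.

```python
# Illustrative only: w_exp and w_sur are assumed, not fitted, weights.
def ffa_response(evidence, prediction, w_exp=1.0, w_sur=2.0):
    """Return (summed population response, surprise) for one presentation."""
    surprise = evidence - prediction              # mismatch signal
    response = w_exp * prediction + w_sur * abs(surprise)
    return response, surprise

r_expected, s_expected = ffa_response(evidence=1.0, prediction=1.0)
r_unexpected, s_unexpected = ffa_response(evidence=1.0, prediction=0.2)
# an unexpected face yields more surprise and a larger summed response
```

With `w_sur > w_exp`, an unexpected stimulus drives a larger summed response than an expected one, the qualitative pattern the imaging study reported.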
Computing with scale-invariant neural representations
NASA Astrophysics Data System (ADS)
Howard, Marc; Shankar, Karthik
The Weber-Fechner law is perhaps the oldest quantitative relationship in psychology. Consider the problem of the brain representing a function f(x). Different neurons have receptive fields that support different parts of the range, such that the ith neuron has a receptive field centered at x_i. Weber-Fechner scaling refers to the finding that the width of the receptive field scales with x_i, as does the difference between the centers of adjacent receptive fields. Weber-Fechner scaling is exponentially resource-conserving. Neurophysiological evidence suggests that neural representations obey Weber-Fechner scaling in the visual system and perhaps other systems as well. We describe an optimality constraint that is solved by Weber-Fechner scaling, providing an information-theoretic rationale for this principle of neural coding. Weber-Fechner scaling can be generated within a mathematical framework using the Laplace transform. Within this framework, simple computations such as translation, correlation and cross-correlation can be accomplished. This framework can in principle be extended to provide a general computational language for brain-inspired cognitive computation on scale-invariant representations. Supported by NSF PHY 1444389 and the BU Initiative for the Physics and Mathematics of Neural Systems.
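The scaling property above can be sketched numerically: laying receptive-field centers out as a geometric progression makes both the field width and the gap to the next field proportional to x_i. The range and cell count below are illustrative, not values from the abstract.

```python
import numpy as np

# Illustrative Weber-Fechner layout: x_min, x_max, n_cells are assumed.
x_min, x_max, n_cells = 1.0, 100.0, 10
ratio = (x_max / x_min) ** (1.0 / (n_cells - 1))
centers = x_min * ratio ** np.arange(n_cells)     # geometric center spacing
widths = 0.5 * centers                            # width proportional to center

# The Weber fraction (gap relative to position) is constant across the range,
# so the cells tile a log axis uniformly: exponential economy of neurons.
gaps = np.diff(centers)
weber_fractions = gaps / centers[:-1]
```

Ten cells here span two decades of x; covering the same range with constant-width fields at the finest resolution would take on the order of a hundred, which is the "exponentially resource-conserving" point.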
Simultaneous neural and movement recording in large-scale immersive virtual environments.
Snider, Joseph; Plank, Markus; Lee, Dongpyo; Poizner, Howard
2013-10-01
Virtual reality (VR) allows precise control and manipulation of rich, dynamic stimuli that, when coupled with on-line motion capture and neural monitoring, can provide a powerful means both of understanding brain-behavioral relations in the high-dimensional world and of assessing and treating a variety of neural disorders. Here we present a system that combines state-of-the-art, fully immersive, 3D, multi-modal VR with temporally aligned electroencephalographic (EEG) recordings. The VR system is dynamic and interactive across visual, auditory, and haptic interactions, providing sight, sound, touch, and force. Crucially, it does so with simultaneous EEG recordings while subjects actively move about a 20 × 20 ft space. The overall end-to-end latency between real movement and its simulated movement in the VR is approximately 40 ms. Spatial precision of the various devices is on the order of millimeters. The temporal alignment with the neural recordings is accurate to within approximately 1 ms. This powerful combination of systems opens up a new window into brain-behavioral relations and a new means of assessment and rehabilitation of individuals with motor and other disorders.
Neuronal integration in visual cortex elevates face category tuning to conscious face perception
Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.
2012-01-01
The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning.
Common capacity-limited neural mechanisms of selective attention and spatial working memory encoding
Fusser, Fabian; Linden, David E J; Rahm, Benjamin; Hampel, Harald; Haenschel, Corinna; Mayer, Jutta S
2011-01-01
One characteristic feature of visual working memory (WM) is its limited capacity, and selective attention has been implicated as a limiting factor. A possible reason why attention constrains the number of items that can be encoded into WM is that the two processes share limited neural resources. Functional magnetic resonance imaging (fMRI) studies have indeed demonstrated commonalities between the neural substrates of WM and attention. Here we investigated whether such overlapping activations reflect interacting neural mechanisms that could result in capacity limitations. To independently manipulate the demands on attention and WM encoding within one single task, we combined visual search and delayed discrimination of spatial locations. Participants were presented with a search array and performed easy or difficult visual search in order to encode one, three or five positions of target items into WM. Our fMRI data revealed colocalised activation for attention-demanding visual search and WM encoding in distributed posterior and frontal regions. However, further analysis yielded two patterns of results. Activity in prefrontal regions increased additively with increased demands on WM and attention, indicating regional overlap without functional interaction. Conversely, the WM load-dependent activation in visual, parietal and premotor regions was severely reduced during high attentional demand. We interpret this interaction as indicating the sites of shared capacity-limited neural resources. Our findings point to differential contributions of prefrontal and posterior regions to the common neural mechanisms that support spatial WM encoding and attention, providing new imaging evidence for attention-based models of WM encoding.
Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.
Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu
2017-01-01
Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts from extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target in the black background, which was different from the distractors (white circle) in color (e.g., red circle target), local features (e.g., white square target), a global feature (e.g., white ring with a hole target) or their combinations (e.g., red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., a white ring with a squared hole).
These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than local features. They also provide the behavioral constraints for identifying the underlying neural substrates.
Jack, Bradley N; Roeber, Urte; O'Shea, Robert P
2017-01-01
When dissimilar images are presented one to each eye, we do not see both images; rather, we see one at a time, alternating unpredictably. This is called binocular rivalry, and it has recently been used to study brain processes that correlate with visual consciousness, because perception changes without any change in the sensory input. Such studies have used various types of images, but the most popular have been gratings: sets of bright and dark lines of orthogonal orientations presented one to each eye. We studied whether using cardinal rival gratings (vertical, 0°, and horizontal, 90°) versus oblique rival gratings (left-oblique, -45°, and right-oblique, 45°) influences early neural correlates of visual consciousness, because of the oblique effect: the tendency for visual performance to be greater for cardinal gratings than for oblique gratings. Participants viewed rival gratings and pressed keys indicating which of the two gratings was dominant. Next, we changed one of the gratings to match the grating shown to the other eye, yielding binocular fusion. Participants perceived the rivalry-to-fusion change to the dominant grating and not to the other, suppressed grating. Using event-related potentials (ERPs), we found neural correlates of visual consciousness at the P1 for both sets of gratings, as well as at the P1-N1 for oblique gratings, and we found a neural correlate of the oblique effect at the N1, but only for perceived changes. These results show that the P1 is the earliest neural activity associated with visual consciousness and that visual consciousness might be necessary to elicit the oblique effect.
Marathe, Amar R; Lawhern, Vernon J; Wu, Dongrui; Slayback, David; Lance, Brent J
2016-03-01
The application space for brain-computer interface (BCI) technologies is rapidly expanding with improvements in technology. However, most real-time BCIs require extensive individualized calibration prior to use, and systems often have to be recalibrated to account for changes in the neural signals due to a variety of factors including changes in human state, the surrounding environment, and task conditions. Novel approaches to reduce calibration time or effort will dramatically improve the usability of BCI systems. Active Learning (AL) is an iterative semi-supervised learning technique for learning in situations in which data may be abundant, but labels for the data are difficult or expensive to obtain. In this paper, we apply AL to a simulated BCI system for target identification using data from a rapid serial visual presentation (RSVP) paradigm to minimize the amount of training samples needed to initially calibrate a neural classifier. Our results show AL can produce similar overall classification accuracy with significantly less labeled data (in some cases less than 20%) when compared to alternative calibration approaches. In fact, AL classification performance matches performance of 10-fold cross-validation (CV) in over 70% of subjects when training with less than 50% of the data. To our knowledge, this is the first work to demonstrate the use of AL for offline electroencephalography (EEG) calibration in a simulated BCI paradigm. While AL itself is not often amenable for use in real-time systems, this work opens the door to alternative AL-like systems that are more amenable for BCI applications and thus enables future efforts for developing highly adaptive BCI systems.
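A minimal uncertainty-sampling sketch of the Active Learning loop described above. A nearest-centroid rule stands in for a real EEG classifier, the data are synthetic two-class Gaussians, and the seed-set size and query budget are assumptions.

```python
import numpy as np

# Illustrative AL loop: classifier, data, and budget are all stand-ins.
rng = np.random.default_rng(1)

X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

labeled = [0, 1, 100, 101]                 # tiny seed set, two per class
unlabeled = [i for i in range(len(X)) if i not in labeled]

def centroids(idx):
    """Class centroids estimated from the currently labeled samples."""
    return {c: X[[i for i in idx if y[i] == c]].mean(axis=0) for c in (0, 1)}

for _ in range(20):                        # query budget (assumed)
    cen = centroids(labeled)
    # small margin between centroid distances = high classifier uncertainty
    margin = {i: abs(np.linalg.norm(X[i] - cen[0]) -
                     np.linalg.norm(X[i] - cen[1])) for i in unlabeled}
    query = min(margin, key=margin.get)    # most uncertain unlabeled sample
    labeled.append(query)                  # the "oracle" supplies its label
    unlabeled.remove(query)

cen = centroids(labeled)
pred = np.array([0 if np.linalg.norm(x - cen[0]) < np.linalg.norm(x - cen[1])
                 else 1 for x in X])
accuracy = float(np.mean(pred == y))
```

The point of the loop is the query rule: instead of labeling samples at random, the learner asks only for labels of samples it is least sure about, which is how AL reaches a given accuracy with far fewer labeled trials.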
Johnson, J L
1994-09-10
The linking-field neural network model of Eckhorn et al. [Neural Comput. 2, 293-307 (1990)] was introduced to explain the experimentally observed synchronous activity among neural assemblies in the cat cortex induced by feature-dependent visual activity. The model produces synchronous bursts of pulses from neurons with similar activity, effectively grouping them by phase and pulse frequency. It gives a basic new function: grouping by similarity. The synchronous bursts are obtained in the limit of strong linking strengths. Here, the linking-field model is investigated in the limit of moderate-to-weak linking, characterized by few if any multiple bursts. In this limit, dynamic, locally periodic traveling waves exist whose time signal encodes the geometrical structure of a two-dimensional input image. The signal can be made insensitive to translation, scale, rotation, distortion, and intensity. The waves transmit information beyond the physical interconnect distance. The model is implemented in an optical hybrid demonstration system. Results of the simulations and the optical system are presented.
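A minimal discrete-time sketch of a linking-field (pulse-coupled) neuron array in the spirit of Eckhorn's model: feeding input, a linking field gathered from neighboring pulses, internal activity U = F(1 + beta*L), and a leaky threshold that jumps after each pulse. All parameter values are illustrative, not those of the original paper.

```python
import numpy as np

# Illustrative pulse-coupled array: beta, decay, V, and theta are assumed.
def pcnn_step(stimulus, Y, theta, beta=0.3, decay=0.8, V=5.0):
    """One iteration over a 2-D neuron array; Y is the previous pulse map."""
    # linking field: pulses gathered from the 4-neighborhood
    L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
         np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
    U = stimulus * (1.0 + beta * L)        # modulatory (multiplicative) linking
    Y_new = (U > theta).astype(float)      # pulse where activity beats threshold
    theta = decay * theta + V * Y_new      # threshold decays, jumps on a pulse
    return Y_new, theta

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0          # toy input: a bright square
Y, theta = np.zeros_like(img), np.full_like(img, 1.5)
fired = np.zeros_like(img)
for _ in range(10):
    Y, theta = pcnn_step(img, Y, theta)
    fired = np.maximum(fired, Y)           # cells that pulsed at least once
```

Because the linking term multiplies the feeding input, neurons with similar drive pull each other over threshold on the same step, which is the grouping-by-similarity behavior the abstract describes; unstimulated cells never fire.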
Neural Mechanisms of Information Storage in Visual Short-Term Memory
Serences, John T.
2016-01-01
The capacity to briefly memorize fleeting sensory information supports visual search and behavioral interactions with relevant stimuli in the environment. Traditionally, studies investigating the neural basis of visual short term memory (STM) have focused on the role of prefrontal cortex (PFC) in exerting executive control over what information is stored and how it is adaptively used to guide behavior. However, the neural substrates that support the actual storage of content-specific information in STM are more controversial, with some attributing this function to PFC and others to the specialized areas of early visual cortex that initially encode incoming sensory stimuli. In contrast to these traditional views, I will review evidence suggesting that content-specific information can be flexibly maintained in areas across the cortical hierarchy ranging from early visual cortex to PFC. While the factors that determine exactly where content-specific information is represented are not yet entirely clear, recognizing the importance of task-demands and better understanding the operation of non-spiking neural codes may help to constrain new theories about how memories are maintained at different resolutions, across different timescales, and in the presence of distracting information.
Developmental Social Cognitive Neuroscience: Insights from Deafness
ERIC Educational Resources Information Center
Corina, David; Singleton, Jenny
2009-01-01
The condition of deafness presents a developmental context that provides insight into the biological, cultural, and linguistic factors underlying the development of neural systems that impact social cognition. Studies of visual attention, behavioral regulation, language development, and face and human action perception are discussed. Visually…
A Neurobehavioral Model of Flexible Spatial Language Behaviors
ERIC Educational Resources Information Center
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-word visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…
Aslam, Tariq M; Zaki, Haider R; Mahmood, Sajjad; Ali, Zaria C; Ahmad, Nur A; Thorell, Mariana R; Balaskas, Konstantinos
2018-01-01
To develop a neural network for the estimation of visual acuity from optical coherence tomography (OCT) images of patients with neovascular age-related macular degeneration (AMD) and to demonstrate its use to model the impact of specific controlled OCT changes on vision. Artificial intelligence (neural network) study. We assessed 1400 OCT scans of patients with neovascular AMD. Fifteen physical features for each eligible OCT, as well as patient age, were used as input data and corresponding recorded visual acuity as the target data to train, validate, and test a supervised neural network. We then applied this network to model the impact on acuity of defined OCT changes in subretinal fluid, subretinal hyperreflective material, and loss of external limiting membrane (ELM) integrity. A total of 1210 eligible OCT scans were analyzed, resulting in 1210 data points, which were each 16-dimensional. A 10-layer feed-forward neural network with 1 hidden layer of 10 neurons was trained to predict acuity and demonstrated a root mean square error of 8.2 letters for predicted compared to actual visual acuity and a mean regression coefficient of 0.85. A virtual model using this network demonstrated the relationship of visual acuity to specific, programmed changes in OCT characteristics. When ELM is intact, there is a shallow decline in acuity with increasing subretinal fluid but a much steeper decline with equivalent increasing subretinal hyperreflective material. When ELM is not intact, all visual acuities are reduced. Increasing subretinal hyperreflective material or subretinal fluid in this circumstance reduces vision further still, but with a smaller gradient than when ELM is intact. The supervised machine learning neural network developed is able to generate an estimated visual acuity value from OCT images in a population of patients with AMD. 
These findings should be of clinical and research interest in macular degeneration, for example in estimating visual prognosis or highlighting the importance of developing treatments targeting more visually destructive pathologies.
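A hedged sketch of a small supervised feed-forward regressor in the spirit of the network described above: 16 input features, one hidden layer of 10 tanh units, and a single output trained against a target value. The data here are synthetic, and the training details (learning rate, iteration count, activation) are assumptions, not the paper's.

```python
import numpy as np

# Illustrative only: synthetic data; lr, iterations, and tanh are assumed.
rng = np.random.default_rng(2)

X = rng.random((200, 16))                  # 16 OCT/age features per scan
y = X @ rng.random(16)                     # synthetic "acuity" target
y = (y - y.mean()) / y.std()               # standardize for stable training

W1 = rng.standard_normal((16, 10)) * 0.1; b1 = np.zeros(10)
W2 = rng.standard_normal((10, 1)) * 0.1;  b2 = np.zeros(1)
lr = 0.1

for _ in range(3000):                      # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)               # hidden-layer activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))   # fit on the training set
```

In the study's terms, each row of X would hold the fifteen physical OCT features plus age, and y the recorded visual acuity; the virtual modeling step then amounts to sweeping one input feature while holding the others fixed and reading off the predicted acuity.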
High-performance object tracking and fixation with an online neural estimator.
Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian
2007-02-01
Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities and joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate bounds of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.
ARM-based visual processing system for prosthetic vision.
Matteucci, Paul B; Byrnes-Preston, Philip; Chen, Spencer C; Lovell, Nigel H; Suaning, Gregg J
2011-01-01
A growing number of prosthetic devices have been shown to provide visual perception to the profoundly blind through electrical neural stimulation. These first-generation devices offer promising outcomes to those affected by degenerative disorders such as retinitis pigmentosa. Although prosthetic approaches vary in their placement of the stimulating array (visual cortex, optic nerve, epi-retinal surface, sub-retinal surface, supra-choroidal space, etc.), most of the solutions incorporate an externally worn device to acquire and process video to provide the implant with instructions on how to deliver electrical stimulation to the patient, in order to elicit phosphenized vision. With the significant increase in availability and performance of low-power-consumption smartphone and personal device processors, the authors investigated the use of a commercially available ARM (Advanced RISC Machine) device as an externally worn processing unit for a prosthetic neural stimulator for the retina. A 400 MHz Samsung S3C2440A ARM920T single-board computer was programmed to extract 98 values from a 1.3 Megapixel OV9650 CMOS camera using impulse, regional averaging and Gaussian sampling algorithms. Power consumption and speed of video processing were compared to results obtained for similar reported devices. The results show that by using code optimization, the system is capable of driving a 98-channel implantable device for the restoration of visual percepts to the blind.
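A sketch of the regional-averaging idea: reducing a camera frame to a small number of stimulation values by averaging non-overlapping pixel blocks. A 14 x 7 = 98 grid matches the channel count mentioned above, but the grid shape and frame size are assumptions; the paper's actual sampling layouts are not given here.

```python
import numpy as np

# Illustrative regional averaging; grid shape and frame size are assumed.
def regional_average(frame, grid=(14, 7)):
    """Average non-overlapping blocks of `frame` into one value per electrode."""
    h, w = frame.shape
    gh, gw = grid
    bh, bw = h // gh, w // gw
    cropped = frame[:gh * bh, :gw * bw]          # drop ragged edges, if any
    blocks = cropped.reshape(gh, bh, gw, bw)
    return blocks.mean(axis=(1, 3)).ravel()      # 98 stimulation values

frame = np.zeros((280, 210))                      # toy "camera" frame
frame[:140, :] = 200.0                            # bright upper half
values = regional_average(frame)                  # row-major electrode order
```

The reshape-then-mean trick does the block averaging without an explicit loop, which matters on a low-power embedded processor of the kind the paper targets.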
Neural network models for spatial data mining, map production, and cortical direction selectivity
NASA Astrophysics Data System (ADS)
Parsons, Olga
A family of ARTMAP neural networks for incremental supervised learning has been developed over the last decade. The Sensor Exploitation Group of MIT Lincoln Laboratory (LL) has incorporated an early version of this network as the recognition engine of a hierarchical system for fusion and data mining of multiple registered geospatial images. The LL system has been successfully fielded, but it is limited to target vs. non-target identifications and does not produce whole maps. This dissertation expands the capabilities of the LL system so that it learns to identify arbitrarily many target classes at once and can thus produce a whole map. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of a consistent procedure and a benchmark testbed has permitted the evaluation of candidate recognition networks as well as pre- and post-processing and feature extraction options. The resulting default ARTMAP network and mapping methodology set a standard for a variety of related mapping problems and application domains. The second part of the dissertation investigates the development of cortical direction selectivity. The possible role of visual experience and oculomotor behavior in the maturation of cells in the primary visual cortex is studied. The responses of neurons in the thalamus and cortex of the cat are modeled when natural scenes are scanned by several types of eye movements. Inspired by Hebbian-like synaptic plasticity, which is based on correlations between cell activations, the second-order statistical structure of thalamo-cortical activity is examined. In the simulations, patterns of neural activity that lead to a correct refinement of cell responses are observed during visual fixation, when small ocular movements occur, but are not observed in the presence of large saccades.
Simulations also replicate experiments in which kittens are reared under stroboscopic illumination. The abnormal fixational eye movements of these cats may account for the puzzling finding of a specific loss of cortical direction selectivity but preservation of orientation selectivity. This work indicates that the oculomotor behavior of visual fixation may play an important role in the refinement of cell response selectivity.
Supèr, Hans; Spekreijse, Henk; Lamme, Victor A F
2003-06-26
To look at an object, its position in the visual scene has to be localized, and subsequently the appropriate oculo-motor behavior needs to be initiated. This kind of behavior is largely controlled by the cortical executive system, such as the frontal eye field. In this report, we analyzed neural activity in the visual cortex in relation to oculo-motor behavior. We show that in a figure-ground detection task, the strength of late modulated activity in the primary visual cortex correlates with the saccade latency. We propose that this may indicate that the variability of reaction times in the detection of a visual stimulus is reflected in low-level visual areas as well as in high-level areas.
Explaining seeing? Disentangling qualia from perceptual organization.
Ibáñez, Agustin; Bekinschtein, Tristan
2010-09-01
Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of a reentrant nature may explain several visual integration processes (feature binding, figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Given the above, should the neural signatures of visual integration (via reentrant processes) be non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some ability to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, while preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network reproduces the audiovisual phenomena observed in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contributions of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of the visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, whereas visual enhancement of auditory localization persists even after complete V1 damage.
The present study may help advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory-deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
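The logic of the model's two conclusions can be illustrated with a toy computation. This is a minimal sketch, not the paper's actual network: the unit names, the sigmoid nonlinearity, and all parameter values are assumptions chosen only to reproduce the qualitative pattern (a multisensory boost at the SC, with conscious detection conditional on residual V1 drive).

```python
import numpy as np

def sigmoid(x, theta=1.0, k=4.0):
    """Saturating activation used for the toy SC unit."""
    return 1.0 / (1.0 + np.exp(-k * (x - theta)))

def sc_response(v, a, w_v=1.0, w_a=1.0):
    # Toy Superior Colliculus unit: sums spatially coincident visual and
    # auditory drive and passes it through the nonlinearity, so combined
    # input is enhanced relative to either input alone.
    return sigmoid(w_v * v + w_a * a)

def visual_detection(v_residual, sc_feedback, threshold=0.5):
    # Conscious detection read out from (residual) visual cortex activity
    # boosted by SC-cortical feedback.
    return (v_residual + sc_feedback) > threshold

# Weak visual stimulus reaching cortex through a spared V1 island:
v = 0.3                                          # sub-threshold on its own
alone = visual_detection(v, sc_response(v, 0.0))
with_sound = visual_detection(v, sc_response(v, 0.8))
# Complete V1 lesion: no residual cortical drive; feedback alone fails.
lesioned = visual_detection(0.0, sc_response(0.0, 0.8))
```

With these invented numbers the sketch reproduces the qualitative result: the weak target is missed alone, detected with a coincident sound, and missed again when no V1 tissue survives.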
Decoding Information for Grasping from the Macaque Dorsomedial Visual Stream.
Filippini, Matteo; Breveglieri, Rossella; Akhras, M Ali; Bosco, Annalisa; Chinellato, Eris; Fattori, Patrizia
2017-04-19
Neurodecoders have been developed by researchers mostly to control neuroprosthetic devices, but also to shed new light on neural functions. In this study, we show that signals representing grip configurations can be reliably decoded from neural data acquired from area V6A of the monkey medial posterior parietal cortex. Two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task in the dark and in the light toward objects of different shapes. Population neural activity was extracted at various time intervals: on vision of the objects, during the delay before movement, and during grasp execution. This activity was used to train and validate a Bayes classifier used for decoding objects and grip types. Recognition rates were well over chance level for all the epochs analyzed in this study. Furthermore, we detected slightly different decoding accuracies depending on the task's visual condition. Generalization analysis was performed by training and testing the system during different time intervals. This analysis demonstrated that a change of code occurred during the course of the task. Our classifier was able to discriminate grasp types well in advance of grasp onset. This feature might be important when timing is critical for sending signals to external devices before movement onset. Our results suggest that neural signals from the dorsomedial visual pathway are a good substrate for feeding neural prostheses for prehensile actions. SIGNIFICANCE STATEMENT Recordings of neural activity from nonhuman primate frontal and parietal cortex have led to the development of methods of decoding movement information to restore coordinated arm actions in paralyzed human beings. Our results show that the signals measured from the monkey medial posterior parietal cortex are valid for correctly decoding information relevant for grasping.
Together with previous studies on decoding reach trajectories from the medial posterior parietal cortex, this highlights the medial parietal cortex as a target site for transforming neural activity into control signals to command prostheses to allow human patients to dexterously perform grasping actions. Copyright © 2017 the authors 0270-6474/17/374311-12$15.00/0.
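The decoding step described above — a Bayes classifier trained on population firing rates — can be sketched as follows. This is a generic Gaussian naive-Bayes decoder run on synthetic data, not the study's code; the class count, neuron count, and tuning values are all illustrative.

```python
import numpy as np

class GaussianNaiveBayesDecoder:
    """Decode discrete grip/object classes from population firing-rate
    vectors. A generic Gaussian naive-Bayes sketch, not the study's code."""

    def fit(self, rates, labels):
        # rates: (n_trials, n_neurons); labels: (n_trials,) integer classes
        self.classes = np.unique(labels)
        self.means = np.array([rates[labels == c].mean(axis=0) for c in self.classes])
        self.vars = np.array([rates[labels == c].var(axis=0) + 1e-6 for c in self.classes])
        self.priors = np.array([(labels == c).mean() for c in self.classes])
        return self

    def predict(self, rates):
        # Log-posterior up to a constant, assuming independent Gaussian neurons.
        log_post = (np.log(self.priors)
                    - 0.5 * (((rates[:, None, :] - self.means) ** 2 / self.vars)
                             + np.log(2 * np.pi * self.vars)).sum(axis=2))
        return self.classes[np.argmax(log_post, axis=1)]

# Synthetic demo: three "grip types", 20 neurons, 50 trials per type.
rng = np.random.default_rng(0)
tuning = rng.uniform(5, 20, size=(3, 20))        # invented mean rates (Hz)
X = np.vstack([rng.normal(tuning[c], 2.0, size=(50, 20)) for c in range(3)])
y = np.repeat([0, 1, 2], 50)
decoder = GaussianNaiveBayesDecoder().fit(X, y)
accuracy = (decoder.predict(X) == y).mean()
```

In practice the study additionally cross-validated across epochs (vision, delay, execution) to test whether the neural code changed over the course of the trial.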
NASA Astrophysics Data System (ADS)
Treder, Matthias S.
2012-08-01
Restoring the ability to communicate and interact with the environment in patients with severe motor disabilities is a vision that has been the main catalyst of early brain-computer interface (BCI) research. The past decade has brought a diversification of the field. BCIs have been examined as a tool for motor rehabilitation and their benefit in non-medical applications such as mental-state monitoring for improved human-computer interaction and gaming has been confirmed. At the same time, the weaknesses of some approaches have been pointed out. One of these weaknesses is gaze-dependence, that is, the requirement that the user of a BCI system voluntarily directs his or her eye gaze towards a visual target in order to efficiently operate a BCI. This not only contradicts the main doctrine of BCI research, namely that BCIs should be independent of muscle activity, but it can also limit its real-world applicability both in clinical and non-medical settings. It is only in a scenario devoid of any motor activity that a BCI solution is without alternative. Gaze-dependencies have surfaced at two different points in the BCI loop. Firstly, a BCI that relies on visual stimulation may require users to fixate on the target location. Secondly, feedback is often presented visually, which implies that the user may have to move his or her eyes in order to perceive the feedback. This special section was borne out of a BCI workshop on gaze-independent BCIs held at the 2011 Society for Applied Neurosciences (SAN) Conference and has then been extended with additional contributions from other research groups. It compiles experimental and methodological work that aims toward gaze-independent communication and mental-state monitoring. Riccio et al review the current state-of-the-art in research on gaze-independent BCIs [1]. Van der Waal et al present a tactile speller that builds on the stimulation of the fingers of the right and left hand [2]. 
Höhne et al analyze the ergonomic aspects of stimuli and systematic class confusions in auditory BCIs [3]. Andersson et al use fMRI for online decoding of covert shifts of visual attention [4]. Thurlings et al show that multi-sensory integration of tactile and visual information can enhance the amplitude of ERP components [5]. Schaeff et al investigate the use of motion VEPs in gaze-independent visual BCIs [6]. Wilson et al substitute visual feedback by mapping the screen's cursor onto a tactor grid that stimulates the tongue [7]. Brouwer et al explore the use of ERP features and spectral features for estimating mental workload in an n-back task [8]. Falzon et al extend the Common Spatial Patterns (CSP) method to the complex plane, taking into account both amplitude and phase relationships [9]. Eliseyev et al present a method for the sparse sub-selection of electrodes for classification [10]. Tonin et al demonstrate that the classification of covert attention shifts is improved by considering sub-bands of the alpha band [11]. Aloise et al investigate effects of classification scheme and decimation on the performance of a gaze-independent BCI [12]. References: [1] Riccio A et al 2012 J. Neural Eng. 9 045001 [2] van der Waal M et al 2012 J. Neural Eng. 9 045002 [3] Höhne J et al 2012 J. Neural Eng. 9 045003 [4] Andersson P et al 2012 J. Neural Eng. 9 045004 [5] Thurlings M E et al 2012 J. Neural Eng. 9 045005 [6] Schaeff S et al 2012 J. Neural Eng. 9 045006 [7] Wilson J A et al 2012 J. Neural Eng. 9 045007 [8] Brouwer A-M et al 2012 J. Neural Eng. 9 045008 [9] Falzon O et al 2012 J. Neural Eng. 9 045009 [10] Eliseyev A et al 2012 J. Neural Eng. 9 045010 [11] Tonin L et al 2012 J. Neural Eng. 9 045011 [12] Aloise F et al 2012 J. Neural Eng. 9 045012
Real-time emulation of neural images in the outer retinal circuit.
Hasegawa, Jun; Yagi, Tetsuya
2008-12-01
We describe a novel real-time system that emulates the architecture and functionality of the vertebrate retina. This system reconstructs the neural images formed by the retinal neurons in real time by using a combination of analog and digital systems consisting of a neuromorphic silicon retina chip, a field-programmable gate array, and a digital computer. While the silicon retina carries out the spatial filtering of input images instantaneously, using the embedded resistive networks that emulate the receptive field structure of the outer retinal neurons, the digital computer carries out the temporal filtering of the spatially filtered images to emulate the dynamical properties of the outer retinal circuits. The emulation of neural images comprising 128 x 128 bipolar cells is carried out at a frame rate of 62.5 Hz. Emulated responses to the Hermann grid, a spot of light, and an annulus of light demonstrate that the system responds as expected from previous physiological and psychophysical observations. Furthermore, the emulated dynamics of neural images in response to natural scenes revealed the complex nature of retinal neuron activity. We have concluded that the system reflects the spatiotemporal responses of bipolar cells in the vertebrate retina. The proposed emulation system is expected to aid in understanding the visual computation in the retina and the brain.
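The two stages of the emulator — analog center-surround spatial filtering followed by digital temporal filtering — can be sketched in software. This is a minimal sketch with illustrative parameters; the chip's resistive-network kernel and the outer-retina dynamics are richer than the difference-of-Gaussians and first-order low-pass used here.

```python
import numpy as np

def dog_kernel(size=11, sigma_c=1.0, sigma_s=2.0):
    # Difference-of-Gaussians kernel standing in for the center-surround
    # receptive field that the chip realizes with resistive networks.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    gauss = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return gauss(sigma_c) - gauss(sigma_s)

def spatial_filter(frame, kernel):
    # Direct 2-D convolution ("same" size, zero padding); the chip does
    # this stage instantaneously in analog hardware.
    k = kernel.shape[0] // 2
    padded = np.pad(frame, k)
    out = np.zeros(frame.shape)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            out[i, j] = (padded[i:i + 2*k + 1, j:j + 2*k + 1] * kernel).sum()
    return out

def temporal_filter(frames, tau=3.0):
    # First-order low-pass across frames, standing in for the digital
    # stage that emulates outer-retina dynamics.
    state = np.zeros(frames[0].shape)
    filtered = []
    for f in frames:
        state += (f - state) / tau
        filtered.append(state.copy())
    return filtered

# A uniform field drives the center-surround stage toward zero in the
# interior, as expected for a balanced DoG kernel.
kernel = dog_kernel()
response = spatial_filter(np.ones((16, 16)), kernel)
```

The near-zero response to uniform input is the property that makes the emulator reproduce brightness illusions such as the Hermann grid, where responses are driven by local luminance contrast rather than absolute level.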
Challinor, Kirsten L; Mond, Jonathan; Stephen, Ian D; Mitchison, Deborah; Stevenson, Richard J; Hay, Phillipa; Brooks, Kevin R
2017-12-01
Although body size and shape misperception (BSSM) is a common feature of anorexia nervosa, bulimia nervosa and muscle dysmorphia, little is known about its underlying neural mechanisms. Recently, a new approach has emerged, based on the long-established non-invasive technique of perceptual adaptation, which allows for inferences about the structure of the neural apparatus responsible for alterations in visual appearance. Here, we describe several recent experimental examples of BSSM, wherein exposure to "extreme" body stimuli causes visual aftereffects of biased perception. The implications of these studies for our understanding of the neural and cognitive representation of human bodies, along with their implications for clinical practice are discussed.
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole-brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations representing primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators for “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white-matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated neural and fMRI activity equivalent to that of the original task-based models.
Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors. PMID:27536235
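The core embedding idea — replacing dedicated noise-generator neurons with background drive drawn from a simulated connectome — can be caricatured in a few lines. This sketch uses generic leaky rate units and a random weight matrix; TVB's node models, the 998-region connectome, and the LSNM populations are far richer, and every name and parameter below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 50                      # toy stand-in for the 998-node connectome
W = rng.uniform(0.0, 0.1, size=(n_nodes, n_nodes))   # invented weights
np.fill_diagonal(W, 0.0)

def step(x, dt=0.01, tau=0.1, noise=0.05):
    # Leaky rate dynamics: each node integrates weighted input from the
    # rest of the network plus a little intrinsic noise.
    drive = W @ np.tanh(x)
    return x + (dt / tau) * (-x + drive) + noise * np.sqrt(dt) * rng.standard_normal(n_nodes)

# A "task-specific" module reads its background drive from a few
# connectome nodes instead of from dedicated noise-generator neurons.
x = np.zeros(n_nodes)
embedded_nodes = [3, 17, 42]
background_drive = []
for _ in range(1000):
    x = step(x)
    background_drive.append(x[embedded_nodes].mean())
background_drive = np.array(background_drive)
```

The paper's test is essentially that a task module fed this kind of connectome-generated background produces neural and simulated fMRI activity equivalent to the original noise-driven model.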
Overcoming Presbyopia by Manipulating the Eyes' Optics
NASA Astrophysics Data System (ADS)
Zheleznyak, Leonard A.
Presbyopia, the age-related loss of accommodation, is a visual condition affecting all adults over the age of 45 years. In presbyopia, individuals lose the ability to focus on nearby objects, due to lifelong growth and stiffening of the eye's crystalline lens. This leads to poor near visual performance and affects patients' quality of life. This thesis is aimed at the correction of presbyopia, and the work can be divided into four aims. First, we examined the characteristics and limitations of currently available strategies for the correction of presbyopia. A natural-view wavefront sensor was used to objectively measure the accommodative ability of patients implanted with an accommodative intraocular lens (IOL). Although these patients had little accommodative ability based on changes in power, pupil miosis and higher-order aberrations led to an improvement in through-focus retinal image quality in some cases. To quantify the through-focus retinal image quality of accommodative and multifocal IOLs directly, an adaptive optics (AO) IOL metrology system was developed. Using this system, the impact of corneal aberrations on presbyopia-correcting IOLs was assessed, providing an objective measure of through-focus retinal image quality and practical guidelines for patient selection. Second, to improve upon existing multifocal designs, we investigated retinal image quality metrics for the prediction of through-focus visual performance. The preferred metric was based on the fidelity of an image convolved with an aberrated point spread function. Using this metric, we investigated the potential of higher-order aberrations and pupil amplitude apodization to increase the depth of focus of the presbyopic eye. Third, we investigated modified monovision, a novel binocular approach to presbyopia correction, using a binocular AO vision simulator.
In modified monovision, different magnitudes of defocus and spherical aberration are introduced to each eye, thereby taking advantage of the binocular visual system. Several experiments using the binocular AO vision simulator found that modified monovision led to significant improvements in through-focus visual performance, binocular summation and stereoacuity, as compared to traditional monovision. Finally, we addressed neural factors affecting visual performance in modified monovision, such as ocular dominance and neural plasticity. We found that pairing modified monovision with a vision training regimen may further improve visual performance beyond the limits set by optics via neural plasticity. This opens the door to an exciting new avenue of vision correction to accompany optical interventions. The research presented in this thesis offers important guidelines for the clinical and scientific communities. Furthermore, the techniques described herein may be applied to other fields of ophthalmology, such as childhood myopia progression.
Neurophysiology and Neuroanatomy of Smooth Pursuit in Humans
ERIC Educational Resources Information Center
Lencer, Rebekka; Trillenberg, Peter
2008-01-01
Smooth pursuit eye movements enable us to focus our eyes on moving objects by utilizing well-established mechanisms of visual motion processing, sensorimotor transformation and cognition. Novel smooth pursuit tasks and quantitative measurement techniques can help unravel the different smooth pursuit components and complex neural systems involved…
Park, Joonkoo; Chiang, Crystal; Brannon, Elizabeth M.; Woldorff, Marty G.
2014-01-01
Recent functional magnetic resonance imaging research has demonstrated that letters and numbers are preferentially processed in distinct regions and hemispheres in the visual cortex. In particular, the left visual cortex preferentially processes letters compared to numbers, while the right visual cortex preferentially processes numbers compared to letters. Because letters and numbers are cultural inventions and are otherwise physically arbitrary, such a double dissociation is strong evidence for experiential effects on neural architecture. Here, we use the high temporal resolution of event-related potentials (ERPs) to investigate the temporal dynamics of the neural dissociation between letters and numbers. We show that the divergence between ERP traces to letters and numbers emerges very early in processing. Letters evoked greater N1 waves (latencies 140–170 ms) than did numbers over left occipital channels, while numbers evoked greater N1s than letters over the right, suggesting letters and numbers are preferentially processed in opposite hemispheres early in visual encoding. Moreover, strings of letters, but not single letters, elicited greater P2 ERP waves (starting around 250 ms) than numbers did over the left hemisphere, suggesting that the visual cortex is tuned to selectively process combinations of letters, but not numbers, further along in the visual processing stream. Additionally, the processing of both of these culturally defined stimulus types differentiated from similar but unfamiliar visual stimulus forms (false fonts) even earlier in the processing stream (the P1 at 100 ms). These findings imply major cortical specialization processes within the visual system driven by experience with reading and mathematics. PMID:24669789
Park, Joonkoo; Chiang, Crystal; Brannon, Elizabeth M; Woldorff, Marty G
2014-10-01
Recent fMRI research has demonstrated that letters and numbers are preferentially processed in distinct regions and hemispheres in the visual cortex. In particular, the left visual cortex preferentially processes letters compared with numbers, whereas the right visual cortex preferentially processes numbers compared with letters. Because letters and numbers are cultural inventions and are otherwise physically arbitrary, such a double dissociation is strong evidence for experiential effects on neural architecture. Here, we use the high temporal resolution of ERPs to investigate the temporal dynamics of the neural dissociation between letters and numbers. We show that the divergence between ERP traces to letters and numbers emerges very early in processing. Letters evoked greater N1 waves (latencies 140-170 msec) than did numbers over left occipital channels, whereas numbers evoked greater N1s than letters over the right, suggesting letters and numbers are preferentially processed in opposite hemispheres early in visual encoding. Moreover, strings of letters, but not single letters, elicited greater P2 ERP waves (starting around 250 msec) than numbers did over the left hemisphere, suggesting that the visual cortex is tuned to selectively process combinations of letters, but not numbers, further along in the visual processing stream. Additionally, the processing of both of these culturally defined stimulus types differentiated from similar but unfamiliar visual stimulus forms (false fonts) even earlier in the processing stream (the P1 at 100 msec). These findings imply major cortical specialization processes within the visual system driven by experience with reading and mathematics.
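The window-based ERP measures reported here (e.g. mean N1 amplitude over 140-170 msec at occipital channels) amount to averaging the waveform within a latency window and comparing channels. A minimal sketch on synthetic waveforms; the channel layout and amplitudes are invented for illustration.

```python
import numpy as np

def mean_amplitude(erp, times, window):
    # Mean ERP amplitude (e.g. in microvolts) within a latency window.
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return erp[:, mask].mean(axis=1)

# Synthetic example: one left- and one right-occipital channel with an
# N1-like negativity peaking at 155 msec (amplitudes are invented).
times = np.arange(-100, 500)                     # msec, 1 kHz sampling
erp = np.zeros((2, times.size))
erp[0] -= 4.0 * np.exp(-((times - 155) / 15.0) ** 2)   # larger left N1
erp[1] -= 2.0 * np.exp(-((times - 155) / 15.0) ** 2)
n1 = mean_amplitude(erp, times, (140, 170))      # the abstract's N1 window
left_larger = abs(n1[0]) > abs(n1[1])
```

A hemispheric comparison of this kind, computed per stimulus category, is what underlies the letters-vs-numbers dissociation described in the abstract.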
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different sizes are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
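The model's central mechanism — multiple spatial filter sizes with size-covarying output thresholds yielding a distributed speed code — can be caricatured as follows. The tuning function, threshold rule, and readout below are illustrative stand-ins, not the model's actual equations.

```python
import numpy as np

SIZES = (1, 2, 4, 8, 16)      # spatial filter sizes (arbitrary units)

def transient_response(speed, size):
    # Toy transient-cell activation: a filter of a given spatial scale
    # responds best when the stimulus sweeps across it at a matched rate
    # (the size-speed correlation). Purely illustrative tuning shape.
    return np.exp(-((speed - size) ** 2) / (2 * (0.5 * size) ** 2))

def population_code(speed):
    acts = np.array([transient_response(speed, s) for s in SIZES])
    # Output thresholds that covary with filter size, then normalization
    # standing in for competition.
    thresholds = 0.1 * np.array(SIZES) / max(SIZES)
    acts = np.maximum(acts - thresholds, 0.0)
    total = acts.sum()
    return acts / total if total > 0 else acts

def decoded_speed(speed):
    # Population-vector readout of the distributed speed code.
    return float(np.dot(population_code(speed), SIZES))
```

Faster stimuli shift activity toward the larger filters, so the decoded value grows with true speed — the kind of ordered population code the model uses to account for speed discrimination data.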
Behavioral and neural effects of congruency of visual feedback during short-term motor learning.
Ossmy, Ori; Mukamel, Roy
2018-05-15
Visual feedback can facilitate or interfere with movement execution. Here, we describe behavioral and neural mechanisms by which the congruency of visual feedback during physical practice of a motor skill modulates subsequent performance gains. Eighteen healthy subjects learned to execute rapid sequences of right-hand finger movements during fMRI scans, either with or without visual feedback. Feedback consisted of a real-time, movement-based display of virtual hands that was either congruent (right virtual hand movement) or incongruent (left virtual hand movement yoked to the executing right hand). At the group level, right-hand performance gains following training with congruent visual feedback were significantly higher relative to training without visual feedback. Conversely, performance gains following training with incongruent visual feedback were significantly lower. Interestingly, these opposite effects were correlated across individual subjects. Activation in the Supplementary Motor Area (SMA) during training corresponded to individual differences in subsequent performance gains. Furthermore, functional coupling of SMA with visual cortices predicted individual differences in behavior. Our results demonstrate that some individuals are more sensitive than others to the congruency of visual feedback during short-term motor learning and that neural activation in SMA correlates with such inter-individual differences. Copyright © 2017 Elsevier Inc. All rights reserved.
Functional neuroanatomy of visual masking deficits in schizophrenia.
Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C
2009-12-01
Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. The objective was to identify and compare regional brain activity associated with one form of visual masking (i.e., backward masking) in patients with schizophrenia and healthy controls. Subjects received functional magnetic resonance imaging scans. While in the scanner, subjects performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). The study was conducted at the University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System, with 19 patients with schizophrenia and 19 healthy control subjects. The main outcome measure was the magnitude of the functional magnetic resonance imaging signal during backward masking. Two ROIs (lateral occipital complex [LO] and the human motion-selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including bilateral inferior parietal lobe and thalamus), but groups did not differ in signal magnitude in these areas. The study results support a key role for LO in visual masking, consistent with previous studies in healthy controls. The current results indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.
Moy, Kyle; Li, Weiyu; Tran, Huu Phuoc; Simonis, Valerie; Story, Evan; Brandon, Christopher; Furst, Jacob; Raicu, Daniela; Kim, Hongkyun
2015-01-01
The nematode Caenorhabditis elegans provides a unique opportunity to interrogate the neural basis of behavior at single neuron resolution. In C. elegans, neural circuits that control behaviors can be formulated based on its complete neural connection map, and easily assessed by applying advanced genetic tools that allow for modulation in the activity of specific neurons. Importantly, C. elegans exhibits several elaborate behaviors that can be empirically quantified and analyzed, thus providing a means to assess the contribution of specific neural circuits to behavioral output. Particularly, locomotory behavior can be recorded and analyzed with computational and mathematical tools. Here, we describe a robust single worm-tracking system, which is based on the open-source Python programming language, and an analysis system, which implements path-related algorithms. Our tracking system was designed to accommodate worms that explore a large area with frequent turns and reversals at high speeds. As a proof of principle, we used our tracker to record the movements of wild-type animals that were freshly removed from abundant bacterial food, and determined how wild-type animals change locomotory behavior over a long period of time. Consistent with previous findings, we observed that wild-type animals show a transition from area-restricted local search to global search over time. Intriguingly, we found that wild-type animals initially exhibit short, random movements interrupted by infrequent long trajectories. This movement pattern often coincides with local/global search behavior, and visually resembles Lévy flight search, a search behavior conserved across species. Our mathematical analysis showed that while most of the animals exhibited Brownian walks, approximately 20% of the animals exhibited Lévy flights, indicating that C. elegans can use Lévy flights for efficient food search. 
In summary, our tracker and analysis software will help analyze the neural basis of the alteration and transition of C. elegans locomotory behavior in a food-deprived condition. PMID:26713869
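The Lévy-vs-Brownian classification described above boils down to asking whether a heavy-tailed or a light-tailed distribution better fits the observed step lengths. Below is a crude sketch in the spirit of the paper's Python tooling, comparing maximum-likelihood power-law and exponential fits on synthetic tracks; the paper's statistical analysis is more careful than this two-way comparison.

```python
import numpy as np

def step_lengths(xy):
    # Euclidean displacement between successive tracked positions.
    d = np.diff(np.asarray(xy, dtype=float), axis=0)
    return np.hypot(d[:, 0], d[:, 1])

def loglik_exponential(s):
    # Maximum-likelihood exponential fit (light-tailed: Brownian-like).
    lam = 1.0 / s.mean()
    return np.sum(np.log(lam) - lam * s)

def loglik_powerlaw(s):
    # Maximum-likelihood Pareto fit (heavy-tailed: Levy-like).
    xmin = s.min()
    mu = 1.0 + s.size / np.sum(np.log(s / xmin))
    return np.sum(np.log((mu - 1.0) / xmin) - mu * np.log(s / xmin))

def classify_walk(xy):
    s = step_lengths(xy)
    s = s[s > 0]
    return "levy" if loglik_powerlaw(s) > loglik_exponential(s) else "brownian"

# Synthetic tracks: Gaussian increments vs. Pareto-distributed step
# lengths with random headings.
rng = np.random.default_rng(2)
brownian = np.cumsum(rng.normal(0.0, 1.0, size=(5000, 2)), axis=0)
lengths = (1.0 - rng.random(5000)) ** -1.0          # Pareto, mu = 2
angles = rng.uniform(0.0, 2.0 * np.pi, 5000)
levy = np.cumsum(np.stack([lengths * np.cos(angles),
                           lengths * np.sin(angles)], axis=1), axis=0)
```

Applied per animal, a criterion of this kind is how one arrives at population-level statements such as the roughly 20% of worms exhibiting Lévy-flight-like search.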
Muthukumaraswamy, Suresh D.; Hibbs, Carina S.; Shapiro, Kimron L.; Bracewell, R. Martyn; Singh, Krish D.; Linden, David E. J.
2011-01-01
The mechanism by which distinct subprocesses in the brain are coordinated is a central conundrum of systems neuroscience. The parietal lobe is thought to play a key role in visual feature integration, and oscillatory activity in the gamma frequency range has been associated with perception of coherent objects and other tasks requiring neural coordination. Here, we examined the neural correlates of integrating mental representations in working memory and hypothesized that parietal gamma activity would be related to the success of cognitive coordination. Working memory is a classic example of a cognitive operation that requires the coordinated processing of different types of information and the contribution of multiple cognitive domains. Using magnetoencephalography (MEG), we report parietal activity in the high gamma (80–100 Hz) range during manipulation of visual and spatial information (colors and angles) in working memory. This parietal gamma activity was significantly higher during manipulation of visual-spatial conjunctions compared with single features. Furthermore, gamma activity correlated with successful performance during the conjunction task but not during the component tasks. Cortical gamma activity in parietal cortex may therefore play a role in cognitive coordination. PMID:21940605
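Quantifying band-limited activity such as the 80-100 Hz parietal gamma reported here reduces, in its simplest form, to estimating spectral power within a frequency band. This is a bare-bones periodogram sketch on a synthetic trial; real MEG analyses use multitaper or wavelet estimates, baseline correction, and source localization.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    # Summed periodogram power within a frequency band.
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()

# Synthetic "trial": a 90 Hz high-gamma burst buried in broadband noise
# (sampling rate and amplitudes are invented for illustration).
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)
trial = rng.normal(0.0, 1.0, t.size) + 2.0 * np.sin(2 * np.pi * 90 * t)
high_gamma = band_power(trial, fs, 80, 100)   # the 80-100 Hz range above
alpha = band_power(trial, fs, 8, 12)
```

Comparing such band-power estimates across conditions (conjunctions vs. single features) and correlating them with task accuracy is the kind of analysis summarized in the abstract.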
Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome
ERIC Educational Resources Information Center
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A.; Mottron, Laurent
2010-01-01
Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis.…
ERIC Educational Resources Information Center
McDowell, Jennifer E.; Dyckman, Kara A.; Austin, Benjamin P.; Clementz, Brett A.
2008-01-01
This review provides a summary of the contributions made by human functional neuroimaging studies to the understanding of neural correlates of saccadic control. The generation of simple visually guided saccades (redirections of gaze to a visual stimulus or pro-saccades) and more complex volitional saccades require similar basic neural circuitry…
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural codes for the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with an increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not to combination across independent features.
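The summation logic at issue can be made concrete with the standard signal-detection prediction for combining independent feature detectors. A minimal sketch; the abstract's point is that this predicted gain is confounded whenever adding features also adds stimulus information.

```python
from math import sqrt

def combined_dprime(dprimes):
    # Ideal combination of independent feature detectors:
    # d'_total = sqrt(sum of d'_i squared).
    return sqrt(sum(d * d for d in dprimes))

# One feature at d' = 1 vs. two independent features at d' = 1 each:
one_feature = combined_dprime([1.0])
two_features = combined_dprime([1.0, 1.0])
predicted_gain = two_features / one_feature      # sqrt(2), about 1.41
```

If the two-feature stimulus also carries more physical signal, an optimal observer with no independent feature codes shows a similar gain, so the sqrt(2) improvement by itself does not diagnose independent neural codes — hence the equated-information stimuli used in the study.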
NASA Astrophysics Data System (ADS)
Yuan, Wu-Jie; Zhou, Jian-Fang; Zhou, Changsong
2016-04-01
Microsaccades are very small eye movements during fixation. Experimentally, they have been found to play an important role in visual information processing. However, neural responses induced by microsaccades are not yet well understood and are rarely studied theoretically. Here we propose a network model with a cascading adaptation including both retinal adaptation and short-term depression (STD) at thalamocortical synapses. In the neural network model, we compare the microsaccade-induced neural responses in the presence of STD and those without STD. It is found that the cascading with STD can give rise to faster and sharper responses to microsaccades. Moreover, STD can enhance response effectiveness and sensitivity to microsaccadic spatiotemporal changes, suggesting improved detection of small eye movements (or moving visual objects). We also explore the mechanism of the response properties in the model. Our studies strongly indicate that STD plays an important role in neural responses to microsaccades. Our model considers simultaneously retinal adaptation and STD at thalamocortical synapses in the study of microsaccade-induced neural activity, and may be useful for further investigation of the functional roles of microsaccades in visual information processing.
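A drastically simplified sketch of the short-term depression mechanism (a depression-only Tsodyks–Markram-style synapse; the parameter values are illustrative, not the paper's fitted model) shows how STD turns a step in presynaptic rate, such as the retinal transient caused by a microsaccade, into a sharp onset response:

```python
# Depression-only synapse driven by a step in presynaptic rate.
TAU_REC = 0.5   # resource recovery time constant (s)
U = 0.5         # fraction of resources consumed per spike
DT = 1e-3       # Euler step (s)

def simulate(rates):
    x = 1.0          # available synaptic resources
    drive = []
    for r in rates:
        drive.append(U * x * r)                     # effective synaptic drive
        x += DT * ((1.0 - x) / TAU_REC - U * x * r)  # deplete and recover
    return drive

# 1 s at 5 Hz baseline, then a step to 50 Hz (microsaccade onset).
rates = [5.0] * 1000 + [50.0] * 1000
drive = simulate(rates)
onset_peak = max(drive[1000:1100])  # transient just after the step
steady = drive[-1]                  # adapted level
print(onset_peak > 2 * steady)      # STD sharpens the onset response
```

Because resources deplete after the rate step, the drive overshoots at onset and then adapts, which is the qualitative sense in which STD yields faster, sharper responses to microsaccadic changes.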
Corina, David P; Knapp, Heather Patterson
2008-12-01
In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.
Neuronal pathway finding: from neurons to initial neural networks.
Roscigno, Cecelia I
2004-10-01
Neuronal pathway finding is crucial for structured cellular organization and development of neural circuits within the nervous system. Neuronal pathway finding within the visual system has been extensively studied and therefore is used as a model to review existing knowledge regarding concepts of this developmental process. General principles of neuron pathway finding throughout the nervous system exist. Comprehension of these concepts guides neuroscience nurses in gaining an understanding of the developmental course of action, the implications of different anomalies, as well as the theoretical basis and nursing implications of some provocative new therapies being proposed to treat neurodegenerative diseases and neurologic injuries. These therapies have limitations in light of current ethical, developmental, and delivery modes and what is known about the development of neuronal pathways.
Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.
2015-01-01
Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644
Automated visual inspection system based on HAVNET architecture
NASA Astrophysics Data System (ADS)
Burkett, K.; Ozbayoglu, Murat A.; Dagli, Cihan H.
1994-10-01
In this study, the HAusdorff-Voronoi NETwork (HAVNET) developed at the UMR Smart Engineering Systems Lab is tested in the recognition of mounted circuit components commonly used in printed circuit board assembly systems. The automated visual inspection system used consists of a CCD camera, neural network-based image processing software, and a data acquisition card connected to a PC. The experiments are run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising and the system can be used in real manufacturing environments. Currently the system is being customized for a specific manufacturing application.
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
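The core intuition, that a spatially sharp visual input pulls the decoded auditory location toward itself, can be caricatured in a few lines (a feedforward toy, not the paper's recurrent two-layer model; all parameters are illustrative):

```python
import math

N = 100  # spatial positions (deg)

def gaussian(center, sigma):
    return [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(N)]

def center_of_mass(act):
    """Population-vector style decoding of the represented position."""
    return sum(i * a for i, a in enumerate(act)) / sum(act)

# Broad (unreliable) auditory input at 40 deg, narrow visual input at 60 deg.
aud = gaussian(40, 15.0)
vis = gaussian(60, 2.0)

W = 0.6  # strength of visual -> auditory cross-modal excitation
aud_with_vis = [a + W * v for a, v in zip(aud, vis)]

print(center_of_mass(aud))           # ~40: auditory alone
print(center_of_mass(aud_with_vis))  # pulled toward 60: ventriloquism shift
```

Because the visual population is much sharper than the auditory one, even modest cross-modal excitation biases the decoded sound location toward the visual stimulus, while the reverse influence on the sharp visual representation would be negligible.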
Learning of spatio-temporal codes in a coupled oscillator system.
Orosz, Gábor; Ashwin, Peter; Townley, Stuart
2009-07-01
In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
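The frequency-adaptation idea can be sketched for a single teacher-learner pair (a minimal caricature of the learning rule; the paper studies networks with clustered states, and the coupling and adaptation constants here are arbitrary):

```python
import math

# One teacher oscillator driving one learner. The learner adapts its phase
# (via coupling K) and its natural frequency (via adaptation rate EPS).
DT, K, EPS = 1e-3, 2.0, 0.5
w_teacher = 3.0   # teacher's natural frequency (rad/s)
w_learner = 1.0   # learner starts with the wrong frequency

th_t = th_l = 0.0
for _ in range(200_000):  # 200 s of simulated time
    dphase = th_t - th_l
    th_t += DT * w_teacher
    th_l += DT * (w_learner + K * math.sin(dphase))
    w_learner += DT * EPS * math.sin(dphase)

print(round(w_learner, 2))  # ~3.0: the learner acquires the teacher's frequency
```

The phase-difference term both locks the learner's phase and, through the adaptation equation, drags its natural frequency toward the teacher's, which is the mechanism by which the learning system acquires the teaching system's spatio-temporal code.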
Neural Architecture for Feature Binding in Visual Working Memory.
Schneegans, Sebastian; Bays, Paul M
2017-04-05
Binding refers to the operation that groups different features together into objects. We propose a neural architecture for feature binding in visual working memory that employs populations of neurons with conjunction responses. We tested this model using cued recall tasks, in which subjects had to memorize object arrays composed of simple visual features (color, orientation, and location). After a brief delay, one feature of one item was given as a cue, and the observer had to report, on a continuous scale, one or two other features of the cued item. Binding failure in this task is associated with swap errors, in which observers report an item other than the one indicated by the cue. We observed that the probability of swapping two items strongly correlated with the items' similarity in the cue feature dimension, and found a strong correlation between swap errors occurring in spatial and nonspatial report. The neural model explains both swap errors and response variability as results of decoding noisy neural activity, and can account for the behavioral results in quantitative detail. We then used the model to compare alternative mechanisms for binding nonspatial features. We found the behavioral results fully consistent with a model in which nonspatial features are bound exclusively via their shared location, with no indication of direct binding between color and orientation. These results provide evidence for a special role of location in feature binding, and the model explains how this special role could be realized in the neural system. SIGNIFICANCE STATEMENT The problem of feature binding is of central importance in understanding the mechanisms of working memory. How do we remember not only that we saw a red and a round object, but that these features belong together to a single object rather than to different objects in our environment? 
Here we present evidence for a neural mechanism for feature binding in working memory, based on encoding of visual information by neurons that respond to the conjunction of features. We find clear evidence that nonspatial features are bound via space: we memorize directly where a color or an orientation appeared, but we memorize which color belonged with which orientation only indirectly by virtue of their shared location. Copyright © 2017 Schneegans and Bays.
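The reported link between cue-feature similarity and swap errors follows naturally from decoding a noisy cue representation; a minimal simulation (hypothetical parameters, not the paper's fitted neural model) reproduces the qualitative effect:

```python
import random

random.seed(0)

def report(cue_value, items, noise_sd):
    """Decode which memorized item a noisy cue refers to, then report its
    label. A 'swap' occurs when the wrong item is selected."""
    noisy_cue = cue_value + random.gauss(0.0, noise_sd)
    # pick the item whose cue feature is nearest to the noisy cue
    chosen = min(items, key=lambda it: abs(it["cue"] - noisy_cue))
    return chosen["label"]

def swap_rate(cue_distance, trials=20_000, noise_sd=1.0):
    items = [{"cue": 0.0, "label": "target"},
             {"cue": cue_distance, "label": "distractor"}]
    swaps = sum(report(0.0, items, noise_sd) == "distractor"
                for _ in range(trials))
    return swaps / trials

print(swap_rate(0.5))  # similar cue features: frequent swaps
print(swap_rate(4.0))  # dissimilar cue features: rare swaps
```

When two items are close in the cue dimension, decoding noise frequently selects the wrong item, so the non-cued item's features are reported, mirroring the observed correlation between cue similarity and swap probability.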
Evolution and Optimality of Similar Neural Mechanisms for Perception and Action during Search
Zhang, Sheng; Eckstein, Miguel P.
2010-01-01
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to different visual processing and world representations for conscious perception than for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways built from linear combinations of primary visual cortex (V1) receptive fields by making the simulated individuals' probability of survival depend on the perceptual accuracy finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependence of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions that resulted in partial or no convergence were a case of an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. 
Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways. PMID:20838589
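For a signal-known-exactly searcher with equal priors and independent Gaussian noise, the ideal perceptual decision reduces to picking the location with the maximum response. A minimal sketch (illustrative d' and set size, not the study's natural-image searcher):

```python
import random

random.seed(1)

def ideal_search_decision(responses):
    """MAP target-location decision under equal priors and unit-variance
    Gaussian noise: the posterior is maximized at the largest response."""
    return max(range(len(responses)), key=lambda i: responses[i])

def run_trials(n_locs=8, dprime=2.0, trials=5000):
    """Fraction correct for a signal-known-exactly searcher."""
    correct = 0
    for _ in range(trials):
        target = random.randrange(n_locs)
        responses = [random.gauss(dprime if i == target else 0.0, 1.0)
                     for i in range(n_locs)]
        correct += ideal_search_decision(responses) == target
    return correct / trials

print(run_trials())  # well above the 1/8 chance level
```

The full model in the study additionally weights responses by eccentricity-dependent detectability and uses a separate stream for saccade targeting; this sketch only shows the perceptual decision rule in its simplest form.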
1985-09-30
layers of the retina as seen in retinitis pigmentosa (Wolbarsht & Landers, 1980; Stefansson et al., 1981a). Those are all long-term effects with a delay… [The remainder of this record is extraction-damaged report-form residue; recoverable index terms: retinal damage, center-surround, laser injury, cat retina, visual perception, anesthesia.] The abstract concerns reports of retinal damage from exposure to short-pulse laser energy…
Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
2015-01-01
The N170 component of the event-related potential (ERP) reflects experience-dependent neural changes in several forms of visual expertise, including expertise for visual words. Readers skilled in writing systems that link characters to phonemes (i.e., alphabetic writing) typically produce a left-lateralized N170 to visual word forms. This study examined the N170 in three Japanese scripts that link characters to larger phonological units. Participants were monolingual English speakers (EL1) and native Japanese speakers (JL1) who were also proficient in English. ERPs were collected using a 129-channel array, as participants performed a series of experiments viewing words or novel control stimuli in a repetition detection task. The N170 was strongly left-lateralized for all three Japanese scripts (including logographic Kanji characters) in JL1 participants, but bilateral in EL1 participants viewing these same stimuli. This demonstrates that left-lateralization of the N170 is dependent on specific reading expertise and is not limited to alphabetic scripts. Additional contrasts within the moraic Katakana script revealed equivalent N170 responses in JL1 speakers for familiar Katakana words and for Kanji words transcribed into novel Katakana words, suggesting that the N170 expertise effect is driven by script familiarity rather than familiarity with particular visual word forms. Finally, for English words and novel symbol string stimuli, both EL1 and JL1 subjects produced equivalent responses for the novel symbols, and more left-lateralized N170 responses for the English words, indicating that such effects are not limited to the first language. Taken together, these cross-linguistic results suggest that similar neural processes underlie visual expertise for print in very different writing systems. PMID:18370600
Optimization of a GCaMP calcium indicator for neural activity imaging
Akerboom, Jasper; Chen, Tsai-Wen; Wardill, Trevor J.; Tian, Lin; Marvin, Jonathan S.; Mutlu, Sevinç; Calderón, Nicole Carreras; Esposti, Federico; Borghuis, Bart G.; Sun, Xiaonan Richard; Gordus, Andrew; Orger, Michael B.; Portugues, Ruben; Engert, Florian; Macklin, John J.; Filosa, Alessandro; Aggarwal, Aman; Kerr, Rex; Takagi, Ryousuke; Kracun, Sebastian; Shigetomi, Eiji; Khakh, Baljit S.; Baier, Herwig; Lagnado, Leon; Wang, Samuel S.-H.; Bargmann, Cornelia I.; Kimmel, Bruce E.; Jayaraman, Vivek; Svoboda, Karel; Kim, Douglas S.; Schreiter, Eric R.; Looger, Loren L.
2012-01-01
Genetically encoded calcium indicators (GECIs) are powerful tools for systems neuroscience. Recent efforts in protein engineering have significantly increased the performance of GECIs. The state-of-the art single-wavelength GECI, GCaMP3, has been deployed in a number of model organisms and can reliably detect three or more action potentials (APs) in short bursts in several systems in vivo. Through protein structure determination, targeted mutagenesis, high-throughput screening, and a battery of in vitro assays, we have increased the dynamic range of GCaMP3 by several-fold, creating a family of “GCaMP5” sensors. We tested GCaMP5s in several systems: cultured neurons and astrocytes, mouse retina, and in vivo in Caenorhabditis chemosensory neurons, Drosophila larval neuromuscular junction and adult antennal lobe, zebrafish retina and tectum, and mouse visual cortex. Signal-to-noise ratio was improved by at least 2–3-fold. In the visual cortex, two GCaMP5 variants detected twice as many visual stimulus-responsive cells as GCaMP3. By combining in vivo imaging with electrophysiology we show that GCaMP5 fluorescence provides a more reliable measure of neuronal activity than its predecessor GCaMP3. GCaMP5 allows more sensitive detection of neural activity in vivo and may find widespread applications for cellular imaging in general. PMID:23035093
Okuyama, Teruhiro; Isoe, Yasuko; Hoki, Masahito; Suehiro, Yuji; Yamagishi, Genki; Naruse, Kiyoshi; Kinoshita, Masato; Kamei, Yasuhiro; Shimizu, Atushi; Kubo, Takeo; Takeuchi, Hideaki
2013-01-01
Background Genetic mosaic techniques have been used to visualize and/or genetically modify a neuronal subpopulation within complex neural circuits in various animals. Neural populations available for mosaic analysis, however, are limited in the vertebrate brain. Methodology/Principal Findings To establish methodology to genetically manipulate neural circuits in medaka, we first created two transgenic (Tg) medaka lines, Tg (HSP:Cre) and Tg (HuC:loxP-DsRed-loxP-GFP). We confirmed medaka HuC promoter-derived expression of the reporter gene in juvenile medaka whole brain, and in neuronal precursor cells in the adult brain. We then demonstrated that stochastic recombination can be induced by micro-injection of Cre mRNA into Tg (HuC:loxP-DsRed-loxP-GFP) embryos at the 1-cell stage, which allowed us to visualize some subpopulations of GFP-positive cells in compartmentalized regions of the telencephalon in the adult medaka brain. This finding suggested that the distribution of clonally-related cells derived from single or a few progenitor cells was restricted to a compartmentalized region. Heat treatment of Tg(HSP:Cre x HuC:loxP-DsRed-loxP-GFP) embryos (0–1 day post fertilization [dpf]) in a thermalcycler (39°C) led to Cre/loxP recombination in the whole brain. The recombination efficiency was notably low when using 2–3 dpf embryos compared with 0–1 dpf embryos, indicating the possibility of stage-dependent sensitivity of heat-inducible recombination. Finally, using an infrared laser-evoked gene operator (IR-LEGO) system, heat shock induced in a micro area in the developing brains led to visualization of clonally-related cells in both juvenile and adult medaka fish. Conclusions/Significance We established a noninvasive method to control Cre/loxP site-specific recombination in the developing nervous system in medaka fish. 
This method will broaden the neural population available for mosaic analyses and allow for lineage tracing of the vertebrate nervous system in both juvenile and adult stages. PMID:23825546
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.
Toward Model Building for Visual Aesthetic Perception
Lughofer, Edwin; Zeng, Xianyi
2017-01-01
Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, quartet, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194
Variability and Correlations in Primary Visual Cortical Neurons Driven by Fixational Eye Movements
McFarland, James M.; Cumming, Bruce G.
2016-01-01
The ability to distinguish between elements of a sensory neuron's activity that are stimulus independent versus driven by the stimulus is critical for addressing many questions in systems neuroscience. This is typically accomplished by measuring neural responses to repeated presentations of identical stimuli and identifying the trial-variable components of the response as noise. In awake primates, however, small “fixational” eye movements (FEMs) introduce uncontrolled trial-to-trial differences in the visual stimulus itself, potentially confounding this distinction. Here, we describe novel analytical methods that directly quantify the stimulus-driven and stimulus-independent components of visual neuron responses in the presence of FEMs. We apply this approach, combined with precise model-based eye tracking, to recordings from primary visual cortex (V1), finding that standard approaches that ignore FEMs typically miss more than half of the stimulus-driven neural response variance, creating substantial biases in measures of response reliability. We show that these effects are likely not isolated to the particular experimental conditions used here, such as the choice of visual stimulus or spike measurement time window, and thus will be a more general problem for V1 recordings in awake primates. We also demonstrate that measurements of the stimulus-driven and stimulus-independent correlations among pairs of V1 neurons can be greatly biased by FEMs. These results thus illustrate the potentially dramatic impact of FEMs on measures of signal and noise in visual neuron activity and also demonstrate a novel approach for controlling for these eye-movement-induced effects. SIGNIFICANCE STATEMENT Distinguishing between the signal and noise in a sensory neuron's activity is typically accomplished by measuring neural responses to repeated presentations of an identical stimulus. 
For recordings from the visual cortex of awake animals, small “fixational” eye movements (FEMs) inevitably introduce trial-to-trial variability in the visual stimulus, potentially confounding such measures. Here, we show that FEMs often have a dramatic impact on several important measures of response variability for neurons in primary visual cortex. We also present an analytical approach for quantifying signal and noise in visual neuron activity in the presence of FEMs. These results thus highlight the importance of controlling for FEMs in studies of visual neuron function, and demonstrate novel methods for doing so. PMID:27277801
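The confound can be illustrated with a toy simulation: if eye jitter modulates the effective stimulus on nominally identical trials, the standard trial-to-trial variance estimate counts that stimulus-driven component as noise (all numbers are illustrative, not the study's recordings):

```python
import random

random.seed(2)

def trial_responses(eye_jitter_sd, intrinsic_sd, trials=10_000):
    """Response = stimulus drive modulated by eye position + intrinsic noise.
    On repeated 'identical' stimuli, eye jitter still varies per trial."""
    gain = 5.0
    return [gain * (1.0 + random.gauss(0.0, eye_jitter_sd))
            + random.gauss(0.0, intrinsic_sd)
            for _ in range(trials)]

def apparent_noise_variance(responses):
    """Trial-to-trial variance about the mean: the standard 'noise' estimate."""
    m = sum(responses) / len(responses)
    return sum((r - m) ** 2 for r in responses) / len(responses)

no_fem = apparent_noise_variance(trial_responses(0.0, 1.0))
with_fem = apparent_noise_variance(trial_responses(0.3, 1.0))
print(no_fem, with_fem)  # FEM-driven variance inflates the 'noise' estimate
```

The analytical approach described in the abstract goes further, using model-based eye tracking to reassign the jitter-driven component to the stimulus-driven response rather than to noise.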
Visual Attention in the First Years: Typical Development and Developmental Disorders
ERIC Educational Resources Information Center
Atkinson, Janette; Braddick, Oliver
2012-01-01
The development of attention is critical for the young child's competence in dealing with the demands of everyday life. Here we review evidence from infants and preschool children regarding the development of three neural subsystems of attention: selective attention, sustained attention, and attentional (executive) control. These systems overlap…
Reciprocal Inhibitory Connections Within a Neural Network for Rotational Optic-Flow Processing
Haag, Juergen; Borst, Alexander
2007-01-01
Neurons in the visual system of the blowfly have large receptive fields that are selective for specific optic flow fields. Here, we studied the neural mechanisms underlying flow–field selectivity in proximal Vertical System (VS)-cells, a particular subset of tangential cells in the fly. These cells have local preferred directions that are distributed such as to match the flow field occurring during a rotation of the fly. However, the neural circuitry leading to this selectivity is not fully understood. Through dual intracellular recordings from proximal VS cells and other tangential cells, we characterized the specific wiring between VS cells themselves and between proximal VS cells and horizontal sensitive tangential cells. We discovered a spiking neuron (Vi) involved in this circuitry that has not been described before. This neuron turned out to be connected to proximal VS cells via gap junctions and, in addition, it was found to be inhibitory onto VS1. PMID:18982122
Sex differences in the development of brain mechanisms for processing biological motion.
Anderson, L C; Bolling, D Z; Schelinski, S; Coffman, M C; Pelphrey, K A; Kaiser, M D
2013-12-01
Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. © 2013 Elsevier Inc. All rights reserved.
Whole-brain activity mapping onto a zebrafish brain atlas.
Randlett, Owen; Wee, Caroline L; Naumann, Eva A; Nnaemeka, Onyeka; Schoppik, David; Fitzgerald, James E; Portugues, Ruben; Lacoste, Alix M B; Riegler, Clemens; Engert, Florian; Schier, Alexander F
2015-11-01
In order to localize the neural circuits involved in generating behaviors, it is necessary to assign activity onto anatomical maps of the nervous system. Using brain registration across hundreds of larval zebrafish, we have built an expandable open-source atlas containing molecular labels and definitions of anatomical regions, the Z-Brain. Using this platform and immunohistochemical detection of phosphorylated extracellular signal–regulated kinase (ERK) as a readout of neural activity, we have developed a system to create and contextualize whole-brain maps of stimulus- and behavior-dependent neural activity. This mitogen-activated protein kinase (MAP)-mapping assay is technically simple, and data analysis is completely automated. Because MAP-mapping is performed on freely swimming fish, it is applicable to studies of nearly any stimulus or behavior. Here we demonstrate our high-throughput approach using pharmacological, visual and noxious stimuli, as well as hunting and feeding. The resultant maps outline hundreds of areas associated with behaviors.
Simple Smartphone-Based Guiding System for Visually Impaired People
Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying
2017-01-01
Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system that solves navigation problems for visually impaired people and provides obstacle avoidance, enabling them to travel smoothly from a starting point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online and offline, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region-based convolutional neural network (Faster R-CNN) algorithm or the "you only look once" (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. Obstacle recognition accuracy in this study reached 60%, which is sufficient for helping visually impaired people perceive the types and locations of obstacles around them. PMID:28608811
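The client-side logic of such a system (pick a mode from network availability, then keep only detections the backend is confident about) can be sketched as follows. This is an illustration only; the names `Detection`, `choose_mode`, and `filter_obstacles` are hypothetical and not from the published system.

```python
# Hypothetical sketch of the guiding system's mode selection and
# detection filtering; all names are illustrative, not the authors' code.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # obstacle class predicted by the detector
    confidence: float  # detector score in [0, 1]

def choose_mode(network_available: bool) -> str:
    """Online mode sends frames to the backend server; offline mode
    falls back to on-device processing."""
    return "online" if network_available else "offline"

def filter_obstacles(detections: List[Detection],
                     threshold: float = 0.6) -> List[str]:
    """Keep only obstacle labels whose detector confidence clears the
    threshold, mirroring the ~60% recognition level reported above."""
    return [d.label for d in detections if d.confidence >= threshold]
```

A caller would run `choose_mode` once per session and `filter_obstacles` on every frame's detector output before announcing obstacles to the user.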
A Biophysical Neural Model To Describe Spatial Visual Attention
NASA Astrophysics Data System (ADS)
Hugues, Etienne; José, Jorge V.
2008-02-01
Visual scenes carry enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in a visual area known as V4 when the animal pays attention directly to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory input, at the high levels found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.
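The balanced-input regime this model requires can be illustrated with a toy leaky integrate-and-fire neuron in which strong excitation and inhibition nearly cancel, so spiking is driven by fluctuations rather than by the mean drive. All parameters below are assumptions chosen for demonstration, not the paper's values.

```python
# Toy leaky integrate-and-fire neuron under nearly balanced excitatory
# and inhibitory drive: the small mean (g_exc - g_inh) plus large
# fluctuations yields irregular, fluctuation-driven spiking.
import random

def simulate_lif(steps=5000, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0,
                 g_exc=1.05, g_inh=1.0, noise=0.5, seed=0):
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        # net drive: excitation and inhibition nearly cancel on average
        drive = (g_exc - g_inh) + noise * rng.gauss(0.0, 1.0)
        v += dt * (-v / tau + drive)  # Euler step of the membrane equation
        if v >= v_thresh:
            spikes += 1
            v = v_reset  # reset after each spike
    return spikes
```

Raising `noise` while keeping `g_exc - g_inh` small increases spike-count variability, the qualitative effect the balanced regime is invoked to explain.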
Donderi, Don C
2006-01-01
The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from research on single forms, form and texture arrays and visual displays. Form complexity and form probability are shown to be linked through their reciprocal relationship in complexity theory, which is in turn shown to be consistent with recent developments in perceptual learning and neural circuit theory. Directions for further research are suggested.
Cichy, Radoslaw Martin; Teng, Santani
2017-02-19
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Meng, Qianli; Huang, Yan; Cui, Ding; He, Lixia; Chen, Lin; Ma, Yuanye; Zhao, Xudong
2018-05-01
"Where to begin" is a fundamental question of vision. A "global-first" topological approach proposed that the first step in object representation is to extract topological properties, especially whether the object has a hole or not. Numerous psychophysical studies found that the hole (closure) can be rapidly recognized by the visual system as a primitive property. However, neuroimaging studies showed that the inferotemporal cortex (IT), which lies at a late stage of the ventral pathway, is involved as a dedicated region. It appears paradoxical that IT serves as a key region for processing this early component of visual information. Does a distinct fast route exist to transmit hole information to IT? We hypothesized that a fast noncortical pathway might participate in processing holes. To address this issue, a backward masking paradigm combined with functional magnetic resonance imaging (fMRI) was applied to measure neural responses to hole and no-hole stimuli in anatomically defined cortical and subcortical regions of interest (ROIs) under different levels of visual awareness, modulated via masking delays. For no-hole stimuli, the neural activation of cortical sites was greatly attenuated when no-hole perception was impaired by strong masking, whereas an enhanced neural response to hole stimuli in noncortical sites was obtained when the stimulus was rendered less visible. The results suggest that whereas the cortical route is required to drive a perceptual response for no-hole stimuli, a subcortical route may be involved in coding the hole feature, resulting in rapid hole perception in primitive vision.
Using CNN Features to Better Understand What Makes Visual Artworks Special.
Brachmann, Anselm; Barth, Erhardt; Redies, Christoph
2017-01-01
One of the goals of computational aesthetics is to understand what is special about visual artworks. By analyzing image statistics, contemporary methods in computer vision enable researchers to identify properties that distinguish artworks from other (non-art) types of images. Such knowledge will eventually allow inferences with regard to the possible neural mechanisms that underlie aesthetic perception in the human visual system. In the present study, we define measures that capture variances of features of a well-established Convolutional Neural Network (CNN), which was trained on millions of images to recognize objects. Using an image dataset that represents traditional Western, Islamic and Chinese art, as well as various types of non-art images, we show that we need only two variance measures to distinguish between the artworks and non-art images with a high classification accuracy of 93.0%. Results for the first variance measure imply that, in the artworks, the subregions of an image tend to be filled with pictorial elements, to which many diverse CNN features respond (richness of feature responses). Results for the second measure imply that this diversity is tied to a relatively large variability of the responses of individual CNN features across the subregions of an image. We hypothesize that this combination of richness and variability of CNN feature responses is one of the properties that makes traditional visual artworks special. We discuss the possible neural underpinnings of this perceptual quality of artworks and propose to study the same quality also in other types of aesthetic stimuli, such as music and literature.
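The two kinds of variance can be illustrated on a feature-response matrix (rows: CNN features, columns: image subregions). These definitions are simplified stand-ins for the paper's measures, not the authors' exact formulas.

```python
# Simplified stand-ins for the two variance measures, computed on a
# feature-response matrix of shape (n_features, n_subregions).
import numpy as np

def richness_of_responses(responses: np.ndarray) -> float:
    """Mean variance across features within each subregion: high when
    many diverse features respond inside the subregions."""
    return float(responses.var(axis=0).mean())

def variability_across_regions(responses: np.ndarray) -> float:
    """Mean variance of each individual feature across subregions: high
    when a feature's response changes strongly from region to region."""
    return float(responses.var(axis=1).mean())
```

A uniform response matrix scores zero on both measures; an artwork-like matrix, in the paper's account, scores high on both.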
ERIC Educational Resources Information Center
Oh, Hwamee; Leung, Hoi-Chung
2010-01-01
In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two…
ERIC Educational Resources Information Center
Park, Joonkoo; Hebrank, Andrew; Polk, Thad A.; Park, Denise C.
2012-01-01
The visual recognition of letters dissociates from the recognition of numbers at both the behavioral and neural level. In this article, using fMRI, we investigate whether the visual recognition of numbers dissociates from letters, thereby establishing a double dissociation. In Experiment 1, participants viewed strings of consonants and Arabic…
Neural Correlates of Morphological Decomposition in a Morphologically Rich Language: An fMRI Study
ERIC Educational Resources Information Center
Lehtonen, Minna; Vorobyev, Victor A.; Hugdahl, Kenneth; Tuokkola, Terhi; Laine, Matti
2006-01-01
By employing visual lexical decision and functional MRI, we studied the neural correlates of morphological decomposition in a highly inflected language (Finnish) where most inflected noun forms elicit a consistent processing cost during word recognition. This behavioral effect could reflect suffix stripping at the visual word form level and/or…
Neural network-based systems for handprint OCR applications.
Ganis, M D; Wilson, C L; Blue, J L
1998-01-01
Over the last five years or so, neural network (NN)-based approaches have been steadily gaining performance and popularity for a wide range of optical character recognition (OCR) problems, from isolated digit recognition to handprint recognition. We present an NN classification scheme based on an enhanced multilayer perceptron (MLP) and describe an end-to-end system for form-based handprint OCR applications designed by the National Institute of Standards and Technology (NIST) Visual Image Processing Group. The enhancements to the MLP are based on (i) neuron activation functions that reduce the occurrence of singular Jacobians; (ii) successive regularization to constrain the volume of the weight space; and (iii) Boltzmann pruning to constrain the dimension of the weight space. Performance characterization studies of NN systems evaluated at the first OCR systems conference and of the NIST form-based handprint recognition system are also summarized.
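The pruning idea (constraining the dimension of the weight space by zeroing weights) can be sketched as follows. NIST used Boltzmann pruning, which removes weights probabilistically under a temperature schedule; for brevity this sketch substitutes simple magnitude pruning as a stand-in for the same dimensionality-reduction idea.

```python
# Magnitude pruning as a simplified stand-in for Boltzmann pruning:
# zero out the smallest-magnitude weights to shrink the effective
# dimension of an MLP's weight space.
import numpy as np

def prune_weights(weights: np.ndarray, fraction: float) -> np.ndarray:
    """Return a copy of `weights` with the given fraction of
    smallest-magnitude entries set to zero (ties may prune extra)."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

In a training loop this would be applied periodically between weight updates, so the network relearns around the removed connections.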
Altered figure-ground perception in monkeys with an extra-striate lesion.
Supèr, Hans; Lamme, Victor A F
2007-11-05
The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that primary visual cortex is involved in this process of figure-ground organization. In primary visual cortex, the late part of the neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception, we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield and normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.
Texture and art with deep neural networks.
Gatys, Leon A; Ecker, Alexander S; Bethge, Matthias
2017-10-01
Although biological vision research and computer vision attempt to understand powerful visual information processing from different angles, they have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience. Copyright © 2017. Published by Elsevier Ltd.
Hierarchical neural network model of the visual system determining figure/ground relation
NASA Astrophysics Data System (ADS)
Kikuchi, Masayuki
2017-07-01
One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from background regions extending behind the objects. Previously, the author proposed a neural network model of figure/ground separation built on the idea that local geometric features, such as curvatures and outer angles at corners, are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines figure/ground relations within a short period (Zhou et al., 2000). To speed up figure/ground determination, this study incorporates a hierarchical architecture into the previous model. Simulations confirmed the effect of hierarchization on computation time: as the number of layers increased, the required computation time decreased. However, this speed-up saturated once the number of layers grew beyond a certain point. This study explains the saturation effect using the notion of average distance between vertices from the field of complex networks, and reproduces it in computer simulation.
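The average-distance argument can be made concrete with a toy graph: extra layers act like shortcut edges on a contour, shrinking the mean shortest-path length between vertices. The graph construction below is hypothetical and only illustrates the complex-network quantity invoked above, not the paper's model.

```python
# Toy illustration: shortcut edges (coarse layers) reduce the average
# shortest-path distance on a ring-shaped "contour" graph.
from collections import deque

def average_distance(adj):
    """Mean shortest-path length over all ordered vertex pairs (BFS)."""
    total, pairs = 0, 0
    for src in range(len(adj)):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

def ring(n):
    """Single-layer contour: each vertex linked to its two neighbours."""
    return [[(i - 1) % n, (i + 1) % n] for i in range(n)]

def ring_with_shortcuts(n, step):
    """Add coarse-layer shortcuts every `step` vertices."""
    adj = ring(n)
    for i in range(0, n, step):
        j = (i + step) % n
        adj[i].append(j)
        adj[j].append(i)
    return adj
```

Adding further shortcut layers keeps shrinking the average distance, but with diminishing returns, mirroring the saturation the simulations report.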
Mender, Bedeho M W; Stringer, Simon M
2015-01-01
We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions.
Strabismus and the Oculomotor System: Insights from Macaque Models
Das, Vallabh E.
2017-01-01
Disrupting binocular vision in infancy leads to strabismus and oftentimes to a variety of associated visual sensory deficits and oculomotor abnormalities. Investigation of this disorder has been aided by the development of various animal models, each of which has advantages and disadvantages. In comparison to studies of binocular visual responses in cortical structures, investigations of neural oculomotor structures that mediate the misalignment and abnormalities of eye movements have been more recent, and these studies have shown that different brain areas are intimately involved in driving several aspects of the strabismic condition, including horizontal misalignment, dissociated deviations, A and V patterns of strabismus, disconjugate eye movements, nystagmus, and fixation switch. The responses of cells in visual and oculomotor areas that potentially drive the sensory deficits and also eye alignment and eye movement abnormalities follow a general theme of disrupted calibration, lower sensitivity, and poorer specificity compared with the normally developed visual oculomotor system. PMID:28532347
Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.
Reavis, Eric A; Frank, Sebastian M; Tse, Peter U
2015-04-15
Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.
Chordate evolution and the origin of craniates: an old brain in a new head.
Butler, A B
2000-06-15
The earliest craniates achieved a unique condition among bilaterally symmetrical animals: they possessed enlarged, elaborated brains with paired sense organs and unique derivatives of neural crest and placodal tissues, including peripheral sensory ganglia, visceral arches, and head skeleton. The craniate sister taxon, cephalochordates, has rostral portions of the neuraxis that are homologous to some of the major divisions of craniate brains. Moreover, recent data indicate that many genes involved in patterning the nervous system are common to all bilaterally symmetrical animals and have been inherited from a common ancestor. Craniates, thus, have an "old" brain in a new head, due to re-expression of these anciently acquired genes. The transition to the craniate brain from a cephalochordate-like ancestral form may have involved a mediolateral shift in expression of the genes that specify nervous system development from various parts of the ectoderm. It is suggested here that the transition was sequential. The first step involved the presence of paired, lateral eyes, elaboration of the alar plate, and enhancement of the descending visual pathway to brainstem motor centers. Subsequently, this central visual pathway served as a template for the additional sensory systems that were elaborated and/or augmented with the "bloom" of migratory neural crest and placodes. This model accounts for the marked uniformity of pattern across central sensory pathways and for the lack of any neural crest-placode cranial nerve for either the diencephalon or mesencephalon. Anat Rec (New Anat) 261:111-125, 2000. Copyright 2000 Wiley-Liss, Inc.
Integrating neuroinformatics tools in TheVirtualBrain.
Woodman, M Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor
2014-01-01
TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting.
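The style of simulation TVB performs (integrating coupled neural mass models over a connectivity matrix) can be illustrated with a minimal sketch. This is not TVB's actual API; it is a plain Euler integration of two coupled van der Pol-type oscillators, a simple relative of TVB's generic two-dimensional oscillator model.

```python
# Minimal sketch of connectome-coupled neural mass simulation: Euler
# integration of van der Pol-type node dynamics coupled through a
# weight matrix. Illustrative only; TVB's simulator is far richer.
import numpy as np

def simulate_coupled_oscillators(weights, steps=1000, dt=0.01,
                                 coupling=0.1, tau=3.0):
    """Integrate dx/dt = x - x**3/3 - y + c*(W @ x), dy/dt = x/tau
    for each node, with `weights` playing the role of the connectome."""
    n = weights.shape[0]
    x = np.linspace(0.1, 0.3, n)  # slightly distinct initial states
    y = np.zeros(n)
    for _ in range(steps):
        dx = x - x**3 / 3.0 - y + coupling * (weights @ x)
        dy = x / tau
        x, y = x + dt * dx, y + dt * dy
    return x, y
```

TVB layers onto this kind of core the forward solutions (EEG/MEG/fMRI), stochastic integration schemes, datatypes, and the web UI described above.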
Giesbrecht, Barry; Sy, Jocelyn L.; Guerin, Scott A.
2012-01-01
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus, as well as in visual and parietal cortex, was modulated by learned visual context even though participants' subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047
Common neural substrates for visual working memory and attention.
Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J
2007-06-01
Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula, blood oxygen-level-dependent activation increased additively with WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components, and their WM load response was severely reduced under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention draw to a large extent on common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.
Contextual effects on perceived contrast: figure-ground assignment and orientation contrast.
Self, Matthew W; Mookhoek, Aart; Tjalma, Nienke; Roelfsema, Pieter R
2015-02-02
Figure-ground segregation is an important step in the path leading to object recognition. The visual system segregates objects ('figures') in the visual scene from their backgrounds ('ground'). Electrophysiological studies in awake-behaving monkeys have demonstrated that neurons in early visual areas increase their firing rate when responding to a figure compared to responding to the background. We hypothesized that similar changes in neural firing would take place in early visual areas of the human visual system, leading to changes in the perception of low-level visual features. In this study, we investigated whether contrast perception is affected by figure-ground assignment using stimuli similar to those in the electrophysiological studies in monkeys. We measured contrast discrimination thresholds and perceived contrast for Gabor probes placed on figures or the background and found that the perceived contrast of the probe was increased when it was placed on a figure. Furthermore, we tested how this effect compared with the well-known effect of orientation contrast on perceived contrast. We found that figure-ground assignment and orientation contrast produced changes in perceived contrast of a similar magnitude, and that they interacted. Our results demonstrate that figure-ground assignment influences perceived contrast, consistent with an effect of figure-ground assignment on activity in early visual areas of the human visual system. © 2015 ARVO.
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Which input parameters have the greatest impact on the model's prediction is often difficult to determine, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
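The two-dimensional mapping described above can be sketched with a plain principal-component projection. The abstract does not name the reduction technique used in the project, so PCA here is only an illustrative stand-in:

```python
import numpy as np

def project_to_2d(data):
    """Project an (n_samples, n_features) array onto its first two
    principal components for 2-D visualization."""
    centered = data - data.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # shape (n_samples, 2)

rng = np.random.default_rng(0)
points = rng.normal(size=(100, 10))  # toy high-dimensional dataset
coords = project_to_2d(points)
print(coords.shape)  # (100, 2)
```

Each row of `coords` can then be scatter-plotted and colored by the outcome variable to see which regions of input space map to which outputs.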
Review On Applications Of Neural Network To Computer Vision
NASA Astrophysics Data System (ADS)
Li, Wei; Nasrabadi, Nasser M.
1989-03-01
Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchically structured neocognitron, high-order correlator, network with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the data form for input. Other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Prospects for applications of some human vision models and neural network models are analyzed.
Chimera states in a Hodgkin-Huxley model of thermally sensitive neurons
NASA Astrophysics Data System (ADS)
Glaze, Tera A.; Lewis, Scott; Bahar, Sonya
2016-08-01
Chimera states occur when identically coupled groups of nonlinear oscillators exhibit radically different dynamics, with one group exhibiting synchronized oscillations and the other desynchronized behavior. This dynamical phenomenon has recently been studied in computational models and demonstrated experimentally in mechanical, optical, and chemical systems. The theoretical basis of these states is currently under active investigation. Chimera behavior is of particular relevance in the context of neural synchronization, given the phenomenon of unihemispheric sleep and the recent observation of asymmetric sleep in human patients with sleep apnea. The similarity of neural chimera states to neural "bump" states, which have been suggested as a model for working memory and visual orientation tuning in the cortex, adds to their interest as objects of study. Chimera states have been demonstrated in the FitzHugh-Nagumo model of excitable cells and in the Hindmarsh-Rose neural model. Here, we demonstrate chimera states and chimera-like behaviors in a Hodgkin-Huxley-type model of thermally sensitive neurons both in a system with Abrams-Strogatz (mean field) coupling and in a system with Kuramoto (distance-dependent) coupling. We map the regions of parameter space for which chimera behavior occurs in each of the two coupling schemes.
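The two-group coupling arrangement described above can be illustrated with a minimal phase-oscillator reduction. This is a hedged sketch of the Abrams-Strogatz (mean-field) scheme with illustrative parameter values, not the Hodgkin-Huxley model of thermally sensitive neurons that the paper actually studies:

```python
import numpy as np

def simulate_two_groups(n=32, mu=0.6, nu=0.4, beta=0.1,
                        steps=2000, dt=0.05, seed=1):
    """Two identically coupled groups of Kuramoto phase oscillators:
    coupling strength mu within a group, nu between groups, and a
    phase lag alpha = pi/2 - beta (Abrams-Strogatz setup)."""
    rng = np.random.default_rng(seed)
    alpha = np.pi / 2 - beta
    # group 0 starts nearly synchronized, group 1 fully incoherent
    theta = np.concatenate([rng.normal(0.0, 0.1, n),
                            rng.uniform(-np.pi, np.pi, n)])
    group = np.repeat([0, 1], n)
    K = np.where(group[:, None] == group[None, :], mu / n, nu / n)
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]  # diff[i, j] = theta_j - theta_i
        theta += dt * np.sum(K * np.sin(diff - alpha), axis=1)
    # Kuramoto order parameter r in [0, 1]; r = 1 means full synchrony
    order = lambda ph: abs(np.exp(1j * ph).mean())
    return order(theta[:n]), order(theta[n:])

r0, r1 = simulate_two_groups()
print(r0, r1)
```

In a chimera regime one group's order parameter stays near 1 while the other's remains well below it; mapping where in (mu, nu, beta) space this occurs is the kind of parameter-space survey the abstract describes.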
Harvey, Ben M; Dumoulin, Serge O
2016-02-15
Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
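The pRF framework this study builds on can be sketched as a linear overlap model: the predicted response at each time point is the pixel-wise product of a Gaussian receptive field with the stimulus aperture. The isotropic Gaussian and the toy sweeping-bar stimulus below are assumptions of this sketch, not the study's actual fitting pipeline:

```python
import numpy as np

def prf_response(x0, y0, sigma, stimulus, grid):
    """Predicted time course of a population receptive field modeled as
    an isotropic 2-D Gaussian: response(t) = sum over pixels of
    pRF(x, y) * stimulus(t, x, y)."""
    xx, yy = grid
    prf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return np.tensordot(stimulus, prf, axes=([1, 2], [0, 1]))

# toy stimulus: a vertical bar aperture sweeping left to right
ax = np.linspace(-10, 10, 41)
grid = np.meshgrid(ax, ax)
frames = np.zeros((41, 41, 41))  # (time, row, column)
for t in range(41):
    frames[t, :, t] = 1.0        # bar occupies column t at time t
ts = prf_response(x0=0.0, y0=0.0, sigma=2.0, stimulus=frames, grid=grid)
print(int(np.argmax(ts)))  # response peaks when the bar crosses the pRF center
```

Fitting (x0, y0, sigma) to measured BOLD time courses, voxel by voxel, is what yields the position and size preferences whose motion-induced shifts the study characterizes.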
An Active System for Visually-Guided Reaching in 3D across Binocular Fixations
2014-01-01
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biologically inspired approach modeled on the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
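The phase-difference disparity estimation mentioned above can be sketched in one dimension: the disparity is recovered as the phase difference between the left and right Gabor responses divided by the filter's spatial frequency. The Gaussian window width and frequency below are illustrative choices, not the paper's filter-bank parameters:

```python
import numpy as np

def phase_disparity(left, right, freq, x):
    """Estimate binocular disparity from the phase difference of complex
    Gabor responses at spatial frequency `freq` (rad/pixel):
    disparity ~ delta_phi / freq."""
    gabor = np.exp(-x ** 2 / 50.0) * np.exp(1j * freq * x)
    resp_l = np.sum(left * np.conj(gabor))
    resp_r = np.sum(right * np.conj(gabor))
    dphi = np.angle(resp_l * np.conj(resp_r))  # phase difference in (-pi, pi]
    return dphi / freq

x = np.arange(-64, 64, dtype=float)
freq = 0.3
shift = 3.0  # true disparity in pixels
left_img = np.cos(freq * x)
right_img = np.cos(freq * (x - shift))
print(phase_disparity(left_img, right_img, freq, x))  # close to 3.0
```

A bank of such filters at several frequencies and orientations, pooled across image patches, yields the dense 2D disparity map that feeds the reaching controller.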
The function and failure of sensory predictions.
Bansal, Sonia; Ford, Judith M; Spering, Miriam
2018-04-23
Humans and other primates are equipped with neural mechanisms that allow them to automatically make predictions about future events, facilitating processing of expected sensations and actions. Prediction-driven control and monitoring of perceptual and motor acts are vital to normal cognitive functioning. This review provides an overview of corollary discharge mechanisms involved in predictions across sensory modalities and discusses consequences of predictive coding for cognition and behavior. Converging evidence now links impairments in corollary discharge mechanisms to neuropsychiatric symptoms such as hallucinations and delusions. We review studies supporting a prediction-failure hypothesis of perceptual and cognitive disturbances. We also outline neural correlates underlying prediction function and failure, highlighting similarities across the visual, auditory, and somatosensory systems. In linking basic psychophysical and psychophysiological evidence of visual, auditory, and somatosensory prediction failures to neuropsychiatric symptoms, our review furthers our understanding of disease mechanisms. © 2018 New York Academy of Sciences.
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. 
The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Normalization is a general neural mechanism for context-dependent decision making
Louie, Kenway; Khaw, Mel W.; Glimcher, Paul W.
2013-01-01
Understanding the neural code is critical to linking brain and behavior. In sensory systems, divisive normalization seems to be a canonical neural computation, observed in areas ranging from retina to cortex and mediating processes including contrast adaptation, surround suppression, visual attention, and multisensory integration. Recent electrophysiological studies have extended these insights beyond the sensory domain, demonstrating an analogous algorithm for the value signals that guide decision making, but the effects of normalization on choice behavior are unknown. Here, we show that choice models using normalization generate significant (and classically irrational) choice phenomena driven by either the value or number of alternative options. In value-guided choice experiments, both monkey and human choosers show novel context-dependent behavior consistent with normalization. These findings suggest that the neural mechanism of value coding critically influences stochastic choice behavior and provide a generalizable quantitative framework for examining context effects in decision making. PMID:23530203
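The divisive normalization computation at the heart of this account is compact enough to state directly: each option's value is divided by the summed value of all options plus a semisaturation constant. A minimal sketch, with an illustrative constant:

```python
import numpy as np

def normalize(values, sigma=1.0):
    """Divisive normalization: each value is scaled by the summed
    activity of all options plus a semisaturation constant sigma."""
    values = np.asarray(values, dtype=float)
    return values / (sigma + values.sum())

# adding a third option lowers the normalized signals for the original
# two and compresses the represented difference between them - the kind
# of context-dependent (classically irrational) effect the paper reports
print(normalize([10, 8]))
print(normalize([10, 8, 6]))
```

Because the denominator grows with the value and number of alternatives, the represented difference between the two best options shrinks as the choice set expands, which is what drives the predicted context effects on stochastic choice.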
Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.
Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng
2017-03-01
Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
ERIC Educational Resources Information Center
Kaya, Deniz
2017-01-01
The purpose of the study is to perform a less-dimensional thorough visualization process for the purpose of determining the images of the students on the concept of angle. The Ward clustering analysis combined with Self-Organizing Neural Network Map (SOM) has been used for the dimension process. The Conceptual Understanding Tool, which consisted…
Stavisky, Sergey D; Kao, Jonathan C; Ryu, Stephen I; Shenoy, Krishna V
2017-07-05
Neural circuits must transform new inputs into outputs without prematurely affecting downstream circuits while still maintaining other ongoing communication with these targets. We investigated how this isolation is achieved in the motor cortex when macaques received visual feedback signaling a movement perturbation. To overcome limitations in estimating the mapping from cortex to arm movements, we also conducted brain-machine interface (BMI) experiments where we could definitively identify neural firing patterns as output-null or output-potent. This revealed that perturbation-evoked responses were initially restricted to output-null patterns that cancelled out at the neural population code readout and only later entered output-potent neural dimensions. This mechanism was facilitated by the circuit's large null space and its ability to strongly modulate output-potent dimensions when generating corrective movements. These results show that the nervous system can temporarily isolate portions of a circuit's activity from its downstream targets by restricting this activity to the circuit's output-null neural dimensions. Copyright © 2017 Elsevier Inc. All rights reserved.
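The output-null versus output-potent decomposition used above has a simple linear-algebra form: given a readout matrix W mapping neural activity to outputs, any activity pattern splits into a component in the row space of W (output-potent) and a component in its null space (output-null, invisible to the readout). A minimal sketch with made-up dimensions:

```python
import numpy as np

def split_null_potent(activity, W):
    """Decompose a population activity vector into an output-potent
    component (row space of readout matrix W) and an output-null
    component (null space of W), so activity = potent + null and
    W @ null = 0."""
    _, s, vt = np.linalg.svd(W)
    rank = int(np.sum(s > 1e-10))
    basis = vt[:rank]                      # orthonormal basis of row space
    potent = basis.T @ (basis @ activity)  # projection onto row space
    null = activity - potent
    return potent, null

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 6))    # readout: 6 neurons -> 2 output dimensions
r = rng.normal(size=6)         # a population activity pattern
potent, null = split_null_potent(r, W)
print(np.allclose(W @ null, 0), np.allclose(potent + null, r))  # True True
```

With 6 neurons and 2 outputs, the null space is 4-dimensional: the large null space the authors invoke is exactly this surplus of neural dimensions over output dimensions, which gives the circuit room to process perturbation signals without moving the arm.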
Chansanroj, Krisanin; Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele
2011-10-09
Artificial neural networks (ANNs) were applied for system understanding and prediction of drug release properties from direct compacted matrix tablets using sucrose esters (SEs) as matrix-forming agents for controlled release of a highly water soluble drug, metoprolol tartrate. Complexity of the system was presented through the effects of SE concentration and tablet porosity at various hydrophilic-lipophilic balance (HLB) values of SEs ranging from 0 to 16. Both effects contributed to release behaviors, especially in the system containing hydrophilic SEs where swelling phenomena occurred. A self-organizing map neural network (SOM) was applied for visualizing interrelations among the variables, and multilayer perceptron neural networks (MLPs) were employed to generalize the system and predict the drug release properties based on HLB value and concentration of SEs and tablet properties, i.e., tablet porosity, volume and tensile strength. Accurate prediction was obtained after systematically optimizing network performance based on the learning algorithm of the MLPs. Drug release was mainly attributed to the effects of SEs, tablet volume and tensile strength in multi-dimensional interrelation, whereas tablet porosity had only a small impact. The system's ability to generalize and accurately predict the drug release properties demonstrates the validity of SOMs and MLPs for the formulation modeling of direct compacted matrix tablets containing controlled release agents of different material properties. Copyright © 2011 Elsevier B.V. All rights reserved.
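The SOM used above for visualizing variable interrelations can be sketched in a few lines: each grid node holds a weight vector, and the best-matching unit and its grid neighbours are pulled toward each sample. Grid size, learning rate, and neighbourhood width below are illustrative, not the study's settings:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=50, lr=0.5, sigma=1.5, seed=0):
    """Minimal self-organizing map trained with exponentially decaying
    learning rate and neighbourhood radius."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)
        for x in data:
            # best-matching unit: node whose weights are closest to the sample
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * (sigma * decay) ** 2))  # neighbourhood
            weights += (lr * decay) * h[:, None] * (x - weights)
    return weights.reshape(rows, cols, -1)

rng = np.random.default_rng(1)
samples = rng.normal(size=(200, 3))  # stand-in for formulation variables
som = train_som(samples)
print(som.shape)  # (5, 5, 3)
```

After training, plotting one weight component per node as a heat map over the grid shows how formulation variables co-vary, which is the visualization role the SOM plays in the study.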
Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra
2017-02-01
Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.
Miconi, Thomas; VanRullen, Rufin
2016-02-01
Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.
Hogendoorn, Hinze; Burkitt, Anthony N
2018-05-01
Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-01-01
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas. DOI: http://dx.doi.org/10.7554/eLife.15252.001 PMID:27596931
Luminance gradient at object borders communicates object location to the human oculomotor system.
Kilpeläinen, Markku; Georgeson, Mark A
2018-01-25
The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing, at both the neural and perceptual levels, is highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradients at the square's edges.
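The quantity the study manipulates, the point of maximum luminance gradient at each edge, is straightforward to compute from a luminance profile. A minimal 1-D sketch with an illustrative blurred-square profile (the study's stimuli were 2-D and phase-manipulated):

```python
import numpy as np

def max_gradient_points(profile, x):
    """Locate the point of maximum absolute luminance gradient on each
    side of a 1-D luminance profile - candidate edge positions for
    saccade programming."""
    g = np.abs(np.gradient(profile, x))
    mid = len(x) // 2
    return x[np.argmax(g[:mid])], x[mid + np.argmax(g[mid:])]

# a blurred 4-deg-wide bright square centred at 0 on a dark background
x = np.linspace(-4, 4, 801)
square = 0.5 * (np.tanh((x + 2) / 0.2) - np.tanh((x - 2) / 0.2))
left_edge, right_edge = max_gradient_points(square, x)
print(left_edge, right_edge, (left_edge + right_edge) / 2)
```

The midpoint of the two maximum-gradient locations is the predicted saccade target; shifting where the gradient peaks (as the phase manipulation did) shifts this prediction even when the square's nominal boundaries stay put.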
Auditory and visual cortex of primates: a comparison of two sensory systems
Rauschecker, Josef P.
2014-01-01
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features at the columnar level are direction selectivity, size/bandwidth selectivity, and receptive fields with segregated versus overlapping on- and off-subregions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177
You, Hongzhi; Wang, Da-Hui
2017-01-01
Neural networks configured with winner-take-all (WTA) competition and N-methyl-D-aspartate receptor (NMDAR)-mediated synaptic dynamics are endowed with various dynamic characteristics of attractors underlying many cognitive functions. This paper presents a novel method for the neuromorphic implementation of a two-variable WTA circuit with NMDARs, aimed at implementing decision-making, working memory, and hysteresis in visual perception. The proposed method is a dynamical-systems approach to circuit synthesis based on a biophysically plausible WTA model. Notably, the slow, non-linear temporal dynamics of NMDAR-mediated synapses were generated. Circuit simulations in Cadence reproduced the ramping neural activities observed in electrophysiological recordings during decision-making experiments, the sustained activities observed in the prefrontal cortex during working memory, and classical hysteresis behavior during visual discrimination tasks. Furthermore, theoretical analysis of the dynamical-systems approach illuminated the underlying mechanisms of decision-making, memory capacity, and hysteresis loops. The consistency between the circuit simulations and the theoretical analysis demonstrated that the WTA circuit with NMDARs was able to capture the attractor dynamics underlying these cognitive functions. Its physical implementation as an elementary module is promising for assembly into integrated neuromorphic cognitive systems. PMID:28223913
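The attractor behaviour this record describes can be sketched with a minimal two-variable rate model. This is not the paper's circuit: the sigmoidal transfer function, weights, and time constants below are my own illustrative assumptions, chosen only to show how a slow NMDA-like synaptic variable produces ramping, winner-take-all competition, and sustained (working-memory) activity after the stimulus is removed.

```python
import numpy as np

def f(drive, theta=0.5, k=0.1):
    # Sigmoidal rate function; rates are normalized to [0, 1] (assumed form).
    return 1.0 / (1.0 + np.exp(-(drive - theta) / k))

dt, tau_s = 1e-3, 0.1           # 1 ms step; ~100 ms NMDA-like time constant
w_self, w_cross = 1.0, 1.0      # recurrent excitation and cross-inhibition
s1 = s2 = 0.0                   # slow synaptic gating variables
r1 = r2 = 0.0
for step in range(4000):                 # 4 s of simulated time
    I1 = 0.55 if step < 1000 else 0.0    # slightly stronger input to pool 1
    I2 = 0.45 if step < 1000 else 0.0    # stimulus removed after 1 s
    r1 = f(I1 + w_self * s1 - w_cross * s2)
    r2 = f(I2 + w_self * s2 - w_cross * s1)
    s1 += dt * (r1 - s1) / tau_s         # slow dynamics -> ramping and memory
    s2 += dt * (r2 - s2) / tau_s
# After stimulus offset, pool 1 keeps firing (sustained, working-memory-like
# activity) while pool 2 is suppressed: a winner-take-all attractor.
```

The small input bias decides the winner, and the recurrent excitation through the slow gating variable holds the decision after the input is gone, which is the attractor dynamics the circuit simulations reproduce.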
Receptoral and postreceptoral visual processes in recovery from chromatic adaptation.
Jameson, D; Hurvich, L M; Varner, F D
1979-01-01
The time course of recovery from chromatic adaptation in human vision was tracked by determining the wavelength of light that appears uniquely yellow (neither red nor green) both before and after exposure to yellowish green and yellowish red adapting lights. Recovery is complete within 5 min after steady light exposure. After exposure to the alternating repeated sequence 10-sec light/10-sec dark, the initial magnitude of the aftereffect is reduced but recovery is retarded. The results are interpreted in terms of two processes located at different levels in the hierarchical organization of the visual system. One is a change in the balance of cone receptor sensitivities; the second is a shift in the equilibrium baseline between opposite-signed responses of the red/green channel at the opponent-process neural level. The baseline-shift mechanism is effective in the condition in which repeated input signals originating at the receptors are of sufficient strength to activate the system effectively. Hence, this process is revealed in the alternating adaptation condition when the receptors undergo partial recovery after each light exposure, but receptor adaptation during continued steady light exposure effectively protects the subsequent neural systems from continued strong activation. PMID:288087
Wilkinson, Nicholas M.; Metta, Giorgio
2014-01-01
Visual scan paths exhibit complex, stochastic dynamics. Even during visual fixation, the eye is in constant motion. Fixational drift and tremor are thought to reflect fluctuations in the persistent neural activity of neural integrators in the oculomotor brainstem, which integrate sequences of transient saccadic velocity signals into a short term memory of eye position. Despite intensive research and much progress, the precise mechanisms by which oculomotor posture is maintained remain elusive. Drift exhibits a stochastic statistical profile which has been modeled using random walk formalisms. Tremor is widely dismissed as noise. Here we focus on the dynamical profile of fixational tremor, and argue that tremor may be a signal which usefully reflects the workings of oculomotor postural control. We identify signatures reminiscent of a certain flavor of transient neurodynamics: toric traveling waves which rotate around a central phase singularity. Spiral waves play an organizational role in dynamical systems at many scales throughout nature, though their potential functional role in brain activity remains a matter of educated speculation. Spiral waves have a repertoire of functionally interesting dynamical properties, including persistence, which suggest that they could in theory contribute to persistent neural activity in the oculomotor postural control system. Whilst speculative, the singularity hypothesis of oculomotor postural control implies testable predictions, and could provide the beginnings of an integrated dynamical framework for eye movements across scales. PMID:24616670
Color, contrast sensitivity, and the cone mosaic.
Williams, D; Sekiguchi, N; Brainard, D
1993-01-01
This paper evaluates the role of various stages in the human visual system in the detection of spatial patterns. Contrast sensitivity measurements were made for interference fringe stimuli in three directions in color space with a psychophysical technique that avoided blurring by the eye's optics including chromatic aberration. These measurements were compared with the performance of an ideal observer that incorporated optical factors, such as photon catch in the cone mosaic, that influence the detection of interference fringes. The comparison of human and ideal observer performance showed that neural factors influence the shape as well as the height of the foveal contrast sensitivity function for all color directions, including those that involve luminance modulation. Furthermore, when optical factors are taken into account, the neural visual system has the same contrast sensitivity for isoluminant stimuli seen by the middle-wavelength-sensitive (M) and long-wavelength-sensitive (L) cones and isoluminant stimuli seen by the short-wavelength-sensitive (S) cones. Though the cone submosaics that feed these chromatic mechanisms have very different spatial properties, the later neural stages apparently have similar spatial properties. Finally, we review the evidence that cone sampling can produce aliasing distortion for gratings with spatial frequencies exceeding the resolution limit. Aliasing can be observed with gratings modulated in any of the three directions in color space we used. We discuss mechanisms that prevent aliasing in most ordinary viewing conditions. PMID:8234313
Trade-off between curvature tuning and position invariance in visual area V4
Sharpee, Tatyana O.; Kouh, Minjoon; Reynolds, John H.
2013-01-01
Humans can rapidly recognize a multitude of objects despite differences in their appearance. The neural mechanisms that endow high-level sensory neurons with both selectivity to complex stimulus features and “tolerance” or invariance to identity-preserving transformations, such as spatial translation, remain poorly understood. Previous studies have demonstrated that both tolerance and selectivity to conjunctions of features are increased at successive stages of the ventral visual stream that mediates visual recognition. Within a given area, such as visual area V4 or the inferotemporal cortex, tolerance has been found to be inversely related to the sparseness of neural responses, which in turn was positively correlated with conjunction selectivity. However, the direct relationship between tolerance and conjunction selectivity has been difficult to establish, with different studies reporting either an inverse or no significant relationship. To resolve this, we measured V4 responses to natural scenes, and using recently developed statistical techniques, we estimated both the relevant stimulus features and the range of translation invariance for each neuron. Focusing the analysis on tuning to curvature, a tractable example of conjunction selectivity, we found that neurons that were tuned to more curved contours had smaller ranges of position invariance and produced sparser responses to natural stimuli. These trade-offs provide empirical support for recent theories of how the visual system estimates 3D shapes from shading and texture flows, as well as the tiling hypothesis of the visual space for different curvature values. PMID:23798444
Neural basis of hierarchical visual form processing of Japanese Kanji characters.
Higuchi, Hiroki; Moriguchi, Yoshiya; Murakami, Hiroki; Katsunuma, Ruri; Mishima, Kazuo; Uno, Akira
2015-12-01
We investigated the neural processing of reading Japanese Kanji characters, which involves unique hierarchical visual processing, including the recognition of visual components specific to Kanji, such as "radicals." We performed functional MRI to measure brain activity in response to hierarchical visual stimuli containing (1) real Kanji characters (complete structure with semantic information), (2) pseudo Kanji characters (subcomponents without complete character structure), (3) artificial characters (character fragments), and (4) a checkerboard (simple photic stimuli). As expected, the peaks of activation in response to the different stimulus types were aligned within the left occipitotemporal visual region along the posterior-anterior axis in order of the structural complexity of the stimuli, from fragments (3) to complete characters (1). Moreover, only real Kanji characters produced functional connectivity between the left inferotemporal area and the language area (left inferior frontal triangularis), whereas pseudo Kanji characters induced connectivity between the left inferotemporal area and the bilateral cerebellum and left putamen. Visual processing of Japanese Kanji thus takes place in the left occipitotemporal cortex, with a clear hierarchy within the region: neural activation differentiates character fragments, subcomponents, and complete characters carrying semantics, with different patterns of connectivity to remote regions for each level.
The primate amygdala represents the positive and negative value of visual stimuli during learning
Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel
2008-01-01
Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning [1-5]. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys' learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160
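The value-reversal logic of this conditioning paradigm can be illustrated with a toy delta-rule learner. This is my own sketch, not the paper's analysis: a stimulus value estimate is updated from reward prediction errors and tracks a reversal of the image-value assignment, the kind of rapid change the amygdala responses were compared against.

```python
# Simple prediction-error (delta-rule) value update for one abstract image.
# The learning rate and outcome coding are assumed for illustration.
alpha = 0.3                 # learning rate
value = 0.0                 # learned value of the image
history = []
for trial in range(40):
    outcome = 1.0 if trial < 20 else -1.0   # reward, then reversal to punishment
    value += alpha * (outcome - value)      # prediction-error update
    history.append(value)
# The value estimate approaches +1 before the reversal and -1 after it.
```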
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to its different layers to allow spatial representations to be remembered and accumulated over time. The extended model, a recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.
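The core architectural idea (adding a recurrent connection so a layer's state accumulates input over time) reduces to h_t = relu(W_ff x_t + W_rec h_{t-1}). The sketch below uses tiny hand-picked weights and "frames" for illustration; it is not the trained network from the study.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

W_ff = np.eye(2)                # feedforward weights (toy values)
W_rec = 0.5 * np.eye(2)         # recurrent "process memory" weights (toy values)
frames = np.array([[1.0, 0.0],  # a short "video" of 2-feature frames
                   [0.0, 1.0],
                   [1.0, 1.0]])

h = np.zeros(2)
for x_t in frames:
    h = relu(W_ff @ x_t + W_rec @ h)       # state carries past frames forward

feedforward_only = relu(W_ff @ frames[-1]) # a pure CNN layer sees only the last frame
```

After the loop, the recurrent state reflects the whole frame sequence (here [1.25, 1.5]) whereas the feedforward response depends only on the final frame ([1.0, 1.0]), which is the temporal memory the extension adds.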
Brown, Lucy L.; Acevedo, Bianca; Fisher, Helen E.
2013-01-01
Four suites of behavioral traits have been associated with four broad neural systems: 1) the dopamine and related norepinephrine system; 2) the serotonin system; 3) the testosterone system; and 4) the estrogen and oxytocin system. A 56-item questionnaire, the Fisher Temperament Inventory (FTI), was developed to define four temperament dimensions associated with these behavioral traits and neural systems. The questionnaire has been used to suggest romantic partner compatibility. The dimensions were named: Curious/Energetic; Cautious/Social Norm Compliant; Analytical/Tough-minded; and Prosocial/Empathetic. For the present study, the FTI was administered to participants in two functional magnetic resonance imaging studies that elicited feelings of love and attachment, near-universal human experiences. Scores for the Curious/Energetic dimension co-varied with activation in a region of the substantia nigra, consistent with the prediction that this dimension reflects activity in the dopamine system. Scores for the Cautious/Social Norm Compliant dimension correlated with activation in the ventrolateral prefrontal cortex in regions associated with social norm compliance, a trait linked with the serotonin system. Scores on the Analytical/Tough-minded scale co-varied with activity in regions of the occipital and parietal cortices associated with visual acuity and mathematical thinking, traits linked with testosterone. Also, testosterone contributes to brain architecture in these areas. Scores on the Prosocial/Empathetic scale correlated with activity in regions of the inferior frontal gyrus, anterior insula and fusiform gyrus. These are regions associated with mirror neurons or empathy, a trait linked with the estrogen/oxytocin system, and where estrogen contributes to brain architecture.
These findings, replicated across two studies, suggest that the FTI measures influences of four broad neural systems, and that these temperament dimensions and neural systems could constitute foundational mechanisms in personality structure and play a role in romantic partnerships. PMID:24236043
An fMRI Study of Parietal Cortex Involvement in the Visual Guidance of Locomotion
ERIC Educational Resources Information Center
Billington, Jac; Field, David T.; Wilkie, Richard M.; Wann, John P.
2010-01-01
Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the "far road" and "near road" mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel where participants…
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.
Hohman, Fred; Hodas, Nathan; Chau, Duen Horng
2017-05-01
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.
Reading Stories Activates Neural Representations of Visual and Motor Experiences
Speer, Nicole K.; Reynolds, Jeremy R.; Swallow, Khena M.; Zacks, Jeffrey M.
2010-01-01
To understand and remember stories, readers integrate their knowledge of the world with information in the text. Here we present functional neuroimaging evidence that neural systems track changes in the situation described by a story. Different brain regions track different aspects of a story, such as a character’s physical location or current goals. Some of these regions mirror those involved when people perform, imagine, or observe similar real-world activities. These results support the view that readers understand a story by simulating the events in the story world and updating their simulation when features of that world change. PMID:19572969
Implications on visual apperception: energy, duration, structure and synchronization.
Bókkon, I; Vimal, Ram Lakhan Pandey
2010-07-01
Although primary visual cortex (V1, or striate cortex) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition), the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/MT/MST, IT, and GF). In the absence of V1, visual signals can still reach several extrastriate areas but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations must also satisfy essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neocortical neurons devoted to representing the visual field (number of neurons per degree of visual angle) is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in the cytochrome oxidase-rich areas of V1. Thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perception also demands structural conditions, representation of information for an adequate duration, and synchronized neural processes and/or 'interactive hierarchical structuralism.' For visual apperception, different visual areas are involved depending on context, such as stimulus characteristics (color, form/shape, motion, and other features). Here, we focus primarily on V1, where specific mitochondria-rich retinotopic structures are found; we also briefly discuss V2, where these structures are sparser. Finally, we point out that residual brain states are not fully reflected in active neural patterns after visual perception: such subliminal residual states are not captured by passive neural recording techniques but require active stimulation to be revealed.
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2:305-314. DOI: 10.1002/wcs.110.
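The "motion sensor" construct in the review above is commonly formalized as a correlation-type (Reichardt) detector, whose opponent output is signed by direction. The sketch below is a textbook construction with my own illustrative choices (a one-sample delay and a sinusoidal stimulus), not the specific model from the review.

```python
import numpy as np

def reichardt(input_a, input_b):
    # Delay each input by one sample, correlate it with the undelayed
    # opposite input, and take the opponent difference of the two subunits.
    delayed_a = np.concatenate(([0.0], input_a[:-1]))
    delayed_b = np.concatenate(([0.0], input_b[:-1]))
    rightward = delayed_a * input_b       # A-then-B correlation: motion A -> B
    leftward = delayed_b * input_a        # B-then-A correlation: motion B -> A
    return np.mean(rightward - leftward)  # signed, direction-opponent output

# Luminance at two nearby points as a grating drifts past them.
t = np.arange(200)
luminance = np.sin(2 * np.pi * t / 20.0)
moving_right = reichardt(luminance, np.roll(luminance, 1))   # B lags A
moving_left = reichardt(luminance, np.roll(luminance, -1))   # B leads A
```

The opponent output is positive for one direction and negative for the other; as the review notes, a single such sensor is ambiguous about speed, so the visual system must compare many sensors with different tunings.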
Emrich, Stephen M; Riggall, Adam C; Larocque, Joshua J; Postle, Bradley R
2013-04-10
Traditionally, load sensitivity of sustained, elevated activity has been taken as an index of storage for a limited number of items in visual short-term memory (VSTM). Recently, studies have demonstrated that the contents of a single item held in VSTM can be decoded from early visual cortex, despite the fact that these areas do not exhibit elevated, sustained activity. It is unknown, however, whether the patterns of neural activity decoded from sensory cortex change as a function of load, as one would expect from a region storing multiple representations. Here, we use multivoxel pattern analysis to examine the neural representations of VSTM in humans across multiple memory loads. In an important extension of previous findings, our results demonstrate that the contents of VSTM can be decoded from areas that exhibit a transient response to visual stimuli, but not from regions that exhibit elevated, sustained load-sensitive delay-period activity. Moreover, the neural information present in these transiently activated areas decreases significantly with increasing load, indicating load sensitivity of the patterns of activity that support VSTM maintenance. Importantly, the decrease in classification performance as a function of load is correlated with within-subject changes in mnemonic resolution. These findings indicate that distributed patterns of neural activity in putatively sensory visual cortex support the representation and precision of information in VSTM.
Woolgar, Alexandra; Williams, Mark A; Rich, Anina N
2015-04-01
Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended (and not distractor) objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands.
Suzurikawa, Jun; Tani, Toshiki; Nakao, Masayuki; Tanaka, Shigeru; Takahashi, Hirokazu
2009-12-01
Recently, intrinsic signal optical imaging has been widely used as a routine procedure for visualizing cortical functional maps. However, there is no well-established imaging method for visualizing cortical functional connectivity, that is, the spatio-temporal patterns of activity propagation in the cerebral cortex. In the present study, we developed a novel experimental setup for investigating the propagation of neural activity, combining the intracortical microstimulation (ICMS) technique with voltage-sensitive dye (VSD) imaging, and demonstrated its feasibility by applying it to the measurement of the time-dependent intra- and inter-hemispheric spread of ICMS-evoked excitation in the cat visual cortices, areas 17 and 18. A microelectrode array for the ICMS was inserted with a specially designed easy-to-detach electrode holder around the 17/18 transition zones (TZs), where the left and right hemispheres are interconnected via the corpus callosum. The microelectrode array was stably anchored in agarose without any holder, which enabled us to visualize evoked activity even in the vicinity of the penetration sites as well as across a wide recording region covering parts of both hemispheres. The VSD imaging successfully visualized ICMS-evoked excitation and its subsequent propagation in the visual cortices both contralateral and ipsilateral to the ICMS. Using the orientation maps as positional references, we showed that the activity propagation patterns were consistent with previously reported anatomical patterns of intracortical and interhemispheric connections. This finding indicates that our experimental system can serve the investigation of cortical functional connectivity.
Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex
Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na
2015-01-01
The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, the orientation selectivity for UV stimuli increased steadily during development, but not direction selectivity. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component for mouse vision. PMID:26219604
Otten, Marte; Banaji, Mahzarin R.
2012-01-01
A number of recent behavioral studies have shown that emotional expressions are perceived differently depending on the race of a face, and that perception of race cues is influenced by emotional expressions. However, neural processes related to the perception of invariant cues that indicate the identity of a face (such as race) are often described as proceeding independently of processes related to the perception of cues that can vary over time (such as emotion). Using a visual face adaptation paradigm, we tested whether these behavioral interactions between emotion and race also reflect interdependent neural representation of emotion and race. We compared visual emotion aftereffects when the adapting face and the ambiguous test face differed in race or not. Emotion aftereffects were much smaller in different-race (DR) trials than in same-race (SR) trials, indicating that the neural representation of a facial expression differs significantly depending on whether the emotional face is black or white. It thus seems that invariant cues such as race interact with variable face cues such as emotion not just at the response level, but also at the level of perception and neural representation. PMID:22403531
Changing the Spatial Scope of Attention Alters Patterns of Neural Gain in Human Cortex
Garcia, Javier O.; Rungratsameetaweemana, Nuttida; Sprague, Thomas C.
2014-01-01
Over the last several decades, spatial attention has been shown to influence the activity of neurons in visual cortex in various, seemingly conflicting ways. These observations have inspired competing models to account for the influence of attention on perception and behavior. Here, we used electroencephalography (EEG) to assess steady-state visual evoked potentials (SSVEPs) in human subjects and showed that highly focused spatial attention primarily enhanced neural responses to high-contrast stimuli (response gain), whereas distributed attention primarily enhanced responses to medium-contrast stimuli (contrast gain). Together, these data suggest that different patterns of neural modulation do not reflect fundamentally different neural mechanisms, but instead reflect changes in the spatial extent of attention. PMID:24381272
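The response-gain versus contrast-gain distinction is conventionally expressed through the Naka-Rushton contrast response function. The sketch below is illustrative only; the parameter values are invented, not fitted to the study's SSVEP data. Response gain scales the maximum response, so the attentional benefit grows with contrast, while contrast gain shifts the semi-saturation point, so the benefit peaks at intermediate contrasts:

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0, baseline=0.0):
    """Contrast response function R(c) = Rmax * c^n / (c^n + c50^n) + b."""
    c = np.asarray(c, dtype=float)
    return r_max * c**n / (c**n + c50**n) + baseline

contrasts = np.linspace(0.01, 1.0, 100)
unattended = naka_rushton(contrasts)
# Response gain: multiplicative scaling of Rmax (largest benefit at high contrast).
response_gain = naka_rushton(contrasts, r_max=1.3)
# Contrast gain: leftward shift of c50 (largest benefit at mid contrast).
contrast_gain = naka_rushton(contrasts, c50=0.2)

rg_benefit = response_gain - unattended
cg_benefit = contrast_gain - unattended
```

Plotting the two benefit curves reproduces the qualitative signature reported above: the response-gain benefit keeps growing toward full contrast, while the contrast-gain benefit rises and then falls, peaking at medium contrast.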
Koushika, S P; Lisbin, M J; White, K
1996-12-01
Tissue-specific alternative pre-mRNA splicing is a widely used mechanism for gene regulation and the generation of different protein isoforms, but relatively little is known about the factors and mechanisms that mediate this process. Tissue-specific RNA-binding proteins could mediate alternative pre-mRNA splicing. In Drosophila melanogaster, the RNA-binding protein encoded by the elav (embryonic lethal abnormal visual system) gene is a candidate for such a role. The ELAV protein is expressed exclusively in neurons, and is important for the formation and maintenance of the nervous system. In this study, photoreceptor neurons genetically depleted of ELAV, and elav-null central nervous system neurons, were analyzed immunocytochemically for the expression of neural proteins. In both situations, the lack of ELAV corresponded with a decrease in the immunohistochemical signal of the neural-specific isoform of Neuroglian, which is generated by alternative splicing. Furthermore, when ELAV was expressed ectopically in cells that normally express only the non-neural isoform of Neuroglian, we observed the generation of the neural isoform of Neuroglian. Drosophila ELAV promotes the generation of the neuron-specific isoform of Neuroglian by the regulation of pre-mRNA splicing. The findings reported in this paper demonstrate that ELAV is necessary, and the ectopic expression of ELAV in imaginal disc cells is sufficient, to mediate neuron-specific alternative splicing.
Nomura, Emi M.; Reber, Paul J.
2012-01-01
Considerable evidence has argued in favor of multiple neural systems supporting human category learning, one based on conscious rule inference and one based on implicit information integration. However, there have been few attempts to study potential system interactions during category learning. The PINNACLE (Parallel Interactive Neural Networks Active in Category Learning) model incorporates multiple categorization systems that compete to provide categorization judgments about visual stimuli. Incorporating competing systems requires inclusion of cognitive mechanisms associated with resolving this competition and creates a potential credit assignment problem in handling feedback. The hypothesized mechanisms make predictions about internal mental states that are not always reflected in choice behavior, but may be reflected in neural activity. Two prior functional magnetic resonance imaging (fMRI) studies of category learning were re-analyzed using PINNACLE to identify neural correlates of internal cognitive states on each trial. These analyses identified additional brain regions supporting the two types of category learning, regions particularly active when the systems are hypothesized to be in maximal competition, and found evidence of covert learning activity in the “off system” (the category learning system not currently driving behavior). These results suggest that PINNACLE provides a plausible framework for how competing multiple category learning systems are organized in the brain and shows how computational modeling approaches and fMRI can be used synergistically to gain access to cognitive processes that support complex decision-making machinery. PMID:24962771
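The competition-and-credit-assignment idea can be made concrete with a toy simulation. Everything below is invented for illustration (the two systems, their update rules, and all constants); it is not the published PINNACLE model. A one-dimensional "rule" system and a two-dimensional "integration" system both respond on every trial, feedback adjusts trust in whichever system controlled behavior, and the off system learns covertly from the same feedback:

```python
import numpy as np

rng = np.random.default_rng(7)

def true_label(stim):
    """Ground-truth category: a diagonal boundary that needs both dimensions."""
    return int(stim[0] + stim[1] > 0)

def rule_system(stim, criterion=0.0):
    """Explicit system: a verbalizable one-dimensional criterion."""
    return int(stim[0] > criterion)

def integration_system(stim, w):
    """Implicit system: weighted integration across both stimulus dimensions."""
    return int(stim @ w > 0)

conf = np.array([0.5, 0.5])          # trust in each system
w = rng.normal(0.0, 0.1, 2)          # integration weights, learned over trials
for _ in range(2000):
    stim = rng.uniform(-1.0, 1.0, 2)
    target = true_label(stim)
    responses = (rule_system(stim), integration_system(stim, w))
    winner = int(np.argmax(conf))    # the system currently driving behavior
    # Credit assignment: feedback adjusts trust in the controlling system...
    conf[winner] += 0.01 if responses[winner] == target else -0.01
    # ...while the "off system" still learns covertly from the same feedback
    # (a perceptron-style update for the integration system).
    if responses[1] != target:
        w += 0.05 * (1.0 if target == 1 else -1.0) * stim

# The integration system ends up outperforming the one-dimensional rule,
# even on trials where it never controlled the overt response.
test_stims = rng.uniform(-1.0, 1.0, (1000, 2))
acc_rule = np.mean([rule_system(s) == true_label(s) for s in test_stims])
acc_integ = np.mean([integration_system(s, w) == true_label(s) for s in test_stims])
```

The covert improvement of the non-controlling system is the kind of internal state that, as the abstract notes, never appears in choice behavior but could appear in neural activity.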
The neural organization of perception in chess experts.
Krawczyk, Daniel C; Boggan, Amy L; McClelland, M Michelle; Bartlett, James C
2011-07-20
The human visual system responds to expertise, and it has been suggested that regions that process faces also process other objects of expertise, including chess boards viewed by experts. We used fMRI to test whether chess and face processing overlap in brain activity. Chess experts and novices exhibited face-selective areas, but these regions showed no selectivity for chess configurations relative to other stimuli. We next compared neural responses to chess displays and to scrambled chess displays to isolate areas relevant to expertise. Areas within the posterior cingulate, orbitofrontal cortex, and right temporal cortex were more active in this comparison in experts than in novices. We also compared chess and face responses within the posterior cingulate and found this area responsive to chess only in experts. These findings indicate that chess configurations are not strongly processed by face-selective regions, even in individuals with expertise in both domains. Further, the area most consistently involved in chess did not overlap with face-responsive areas. Overall, these results suggest that expert visual processing may be similar at the level of recognition, but need not show the same neural correlates. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2016-04-26
Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared with other objects. Yet the underlying mechanism of face processing is not fully understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle, and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, can also predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and the idea of canonical face views. Our model provides insights into the underlying computations that transfer visual information from posterior to anterior face patches.
2013-01-01
Introduction Intestinal dysmotility following human necrotizing enterocolitis suggests that the enteric nervous system is injured during the disease. We examined human intestinal specimens to characterize the enteric nervous system injury that occurs in necrotizing enterocolitis, and then used an animal model of experimental necrotizing enterocolitis to determine whether transplantation of neural stem cells can protect the enteric nervous system from injury. Methods Human intestinal specimens resected from patients with necrotizing enterocolitis (n = 18), from control patients with bowel atresia (n = 8), and from necrotizing enterocolitis and control patients undergoing stoma closure several months later (n = 14 and n = 6 respectively) were subjected to histologic examination, immunohistochemistry, and real-time reverse-transcription polymerase chain reaction to examine the myenteric plexus structure and neurotransmitter expression. In addition, experimental necrotizing enterocolitis was induced in newborn rat pups and neurotransplantation was performed by administration of fluorescently labeled neural stem cells, with subsequent visualization of transplanted cells and determination of intestinal integrity and intestinal motility. Results There was significant enteric nervous system damage with increased enteric nervous system apoptosis, and decreased neuronal nitric oxide synthase expression in myenteric ganglia from human intestine resected for necrotizing enterocolitis compared with control intestine. Structural and functional abnormalities persisted months later at the time of stoma closure. Similar abnormalities were identified in rat pups exposed to experimental necrotizing enterocolitis. 
Pups receiving neural stem cell transplantation had improved enteric nervous system and intestinal integrity, differentiation of transplanted neural stem cells into functional neurons, significantly improved intestinal transit, and significantly decreased mortality compared with control pups. Conclusions Significant injury to the enteric nervous system occurs in both human and experimental necrotizing enterocolitis. Neural stem cell transplantation may represent a novel future therapy for patients with necrotizing enterocolitis. PMID:24423414
Methods and Apparatus for Autonomous Robotic Control
NASA Technical Reports Server (NTRS)
Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)
2017-01-01
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
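The evolutionary approach described here can be sketched in miniature. The controller architecture, fitness function, and GA settings below are invented stand-ins (the real system evolved recurrent dynamical networks evaluated on a physical robot); the sketch only shows the genotype-to-fitness loop with truncation selection and Gaussian mutation:

```python
import numpy as np

rng = np.random.default_rng(0)

def controller(weights, sensors):
    """Toy fixed-architecture controller: 2 sensor inputs -> 2 motor outputs."""
    return np.tanh(sensors @ weights.reshape(2, 2))

def fitness(weights):
    """Stand-in for behavioral evaluation: reward motor asymmetry that
    steers away from the side with the stronger obstacle reading."""
    score = 0.0
    for left, right in [(0.9, 0.1), (0.1, 0.9), (0.7, 0.3)]:
        m_left, m_right = controller(weights, np.array([left, right]))
        score += -(m_left - m_right) * (left - right)
    return score

def evolve(generations=150, pop_size=30, sigma=0.3):
    """Minimal elitist genetic algorithm on real-valued genotypes."""
    pop = rng.normal(0.0, 1.0, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep top half
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)

best = evolve()
```

Note that, as in the paper's framing, nothing in the genotype specifies how sensory information should be used; the selection pressure alone shapes a controller whose motor outputs exploit the sensor asymmetry.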
Eastman, Kyler M; Huk, Alexander C
2012-01-01
Neurophysiological studies in awake, behaving primates (both human and non-human) have focused with increasing scrutiny on the temporal relationship between neural signals and behaviors. Consequently, laboratories are often faced with the problem of developing experimental equipment that can support data recording with high temporal precision and also be flexible enough to accommodate a wide variety of experimental paradigms. To this end, we have developed a MATLAB toolbox that integrates several modern pieces of equipment, but still grants experimenters the flexibility of a high-level programming language. Our toolbox takes advantage of three popular and powerful technologies: the Plexon apparatus for neurophysiological recordings (Plexon, Inc., Dallas, TX, USA), a Datapixx peripheral (Vpixx Technologies, Saint-Bruno, QC, Canada) for control of analog, digital, and video input-output signals, and the Psychtoolbox MATLAB toolbox for stimulus generation (Brainard, 1997; Pelli, 1997; Kleiner et al., 2007). The PLDAPS ("Platypus") system is designed to support the study of the visual systems of awake, behaving primates during multi-electrode neurophysiological recordings, but can be easily applied to other related domains. Despite its wide range of capabilities and support for cutting-edge video displays and neural recording systems, the PLDAPS system is simple enough for someone with basic MATLAB programming skills to design their own experiments.
Hu, Meng; Liang, Hualou
2013-04-01
Generalized flash suppression (GFS), in which a salient visual stimulus can be rendered invisible despite continuous retinal input, provides a rare opportunity to directly study the neural mechanism of visual perception. Previous work applying linear methods, such as spectral analysis, to local field potentials (LFPs) recorded during GFS has shown that LFP power in distinct frequency bands is differentially modulated by perceptual suppression. Yet linear methods alone may be insufficient for a full assessment of neural dynamics, owing to the fundamentally nonlinear nature of neural signals. In this study, we analyzed LFP data collected from multiple visual areas (V1, V2, and V4) of macaque monkeys performing the GFS task, using a nonlinear method, adaptive multi-scale entropy (AME), to reveal the neural dynamics of perceptual suppression. In addition, we propose a new cross-entropy measure at multiple scales, namely adaptive multi-scale cross-entropy (AMCE), to assess the nonlinear functional connectivity between two cortical areas. We show that: (1) multi-scale entropy exhibits percept-related changes in all three areas, with higher entropy observed during perceptual suppression; (2) the magnitude of the perception-related entropy changes increases systematically over successive hierarchical stages (i.e., from the lower areas V1 and V2 up to the higher area V4); and (3) cross-entropy between any two cortical areas reveals a higher degree of asynchrony, or dissimilarity, during perceptual suppression, indicating decreased functional connectivity between cortical areas. Taken together, these results suggest that perceptual suppression is related to reduced functional connectivity and increased uncertainty of neural responses, and that the modulation of perceptual suppression is more effective at higher visual cortical areas. AME is demonstrated to be a useful technique for revealing the underlying dynamics of nonlinear, nonstationary neural signals.
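Adaptive multi-scale entropy builds on sample entropy computed over progressively coarser versions of a signal. The sketch below implements the classic (non-adaptive, Costa-style) multiscale entropy for intuition; the paper's AME replaces the fixed coarse-graining with an adaptive, data-driven decomposition, and the parameter choices here are conventional defaults, not the authors' settings:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the conditional probability that templates
    matching for m points (Chebyshev distance <= r) also match for m+1."""
    x = np.asarray(x, dtype=float)

    def pair_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(t)) / 2.0   # exclude self-matches

    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4), m=2, r_factor=0.2):
    """Costa-style MSE: coarse-grain by non-overlapping averaging; the
    tolerance r stays fixed at r_factor times the original signal's SD."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    values = []
    for s in scales:
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        values.append(sample_entropy(coarse, m=m, r=r))
    return values

# White noise: entropy falls off with scale, because averaging removes
# variability faster than the fixed tolerance forgives.
rng = np.random.default_rng(1)
mse = multiscale_entropy(rng.normal(size=1000))
```

Applied to LFP segments from two perceptual conditions, a curve like `mse` computed per condition is the kind of entropy-versus-scale profile in which the reported percept-related differences would appear.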
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.
2004-01-01
We continuously scan the visual world via rapid, or saccadic, eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To determine whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually and memory-guided saccades, neural activity in V1 becomes stronger, with the strongest presaccadic responses found at the location of the saccade target. In addition, for memory-guided saccades the strength of presaccadic activity correlates with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334
Neural correlates of context-dependent feature conjunction learning in visual search tasks.
Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U
2016-06-01
Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.
Control of Wind Tunnel Operations Using Neural Net Interpretation of Flow Visualization Records
NASA Technical Reports Server (NTRS)
Buggele, Alvin E.; Decker, Arthur J.
1994-01-01
Neural net control of operations in a small subsonic/transonic/supersonic wind tunnel at Lewis Research Center is discussed. The tunnel and the layout for neural net control, or control by other parallel processing techniques, are described. The tunnel is an affordable, multiuser platform for testing instrumentation and components, as well as parallel processing and control strategies. Neural nets have already been tested on archival schlieren and holographic visualizations from this tunnel, as well as on recent supersonic and transonic shadowgraphs. This paper discusses the performance of neural nets for interpreting shadowgraph images in connection with a recent exercise in tuning the tunnel in a subsonic/transonic cascade mode of operation. That mode was used to perform wake surveys in connection with NASA's Advanced Subsonic Technology (AST) noise reduction program. The shadowgraphs were presented to the neural nets as 60 by 60 pixel arrays. The outputs were tunnel parameters such as valve settings, or tunnel state identifiers for selected tunnel operating points, conditions, or states. The neural nets were very sensitive, perhaps too sensitive, to shadowgraph pattern detail. However, the nets exhibited good immunity to variations in brightness, to noise, and to changes in contrast. The nets are fast enough that ten or more can be combined per control operation to interpret flow visualization data, point sensor data, and model calculations. The pattern sensitivity of the nets will be utilized and tested to control wind tunnel operations at Mach 2.0 based on shock wave patterns.
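The interpretation task, mapping 60 by 60 pixel shadowgraph arrays to tunnel state identifiers, can be illustrated with a minimal classifier. The frames below are synthetic stand-ins (two "states" distinguished by the orientation of a gradient pattern, plus per-pixel noise and global brightness jitter), and the single-layer softmax net is a deliberately simple sketch, not the nets actually used at Lewis:

```python
import numpy as np

rng = np.random.default_rng(3)

def make_frame(state):
    """Synthetic 60x60 'shadowgraph': vertical (state 0) or horizontal
    (state 1) gradient pattern, plus pixel noise and brightness jitter."""
    g = np.linspace(-1.0, 1.0, 60)
    base = np.outer(g, np.ones(60)) if state == 0 else np.outer(np.ones(60), g)
    return base + rng.normal(0.0, 0.5, (60, 60)) + rng.normal(0.0, 0.3)

labels = np.array([0] * 50 + [1] * 50)
X = np.array([make_frame(s).ravel() for s in labels])

# Single-layer softmax "net" trained by gradient descent on cross-entropy.
W = np.zeros((3600, 2))
onehot = np.eye(2)[labels]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.01 / len(X) * (X.T @ (p - onehot))

# Fresh frames: the learned weights ignore uniform brightness shifts because
# the gradient patterns sum to zero over the frame.
probe = np.array([make_frame(s).ravel() for s in (0, 1, 0, 1)])
pred = np.argmax(probe @ W, axis=1)
```

The brightness jitter in the synthetic frames mirrors the reported immunity to brightness variations: a gradient-shaped weight pattern cancels any uniform offset added to the whole frame.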
Cohn, Neil; Jackendoff, Ray; Holcomb, Phillip J; Kuperberg, Gina R
2014-11-01
Constituent structure has long been established as a central feature of human language. Analogous to how syntax organizes words in sentences, a narrative grammar organizes sequential images into hierarchic constituents. Here we show that the brain draws upon this constituent structure to comprehend wordless visual narratives. We recorded neural responses as participants viewed sequences of visual images (comics strips) in which blank images either disrupted individual narrative constituents or fell at natural constituent boundaries. A disruption of either the first or the second narrative constituent produced a left-lateralized anterior negativity effect between 500 and 700ms. Disruption of the second constituent also elicited a posteriorly-distributed positivity (P600) effect. These neural responses are similar to those associated with structural violations in language and music. These findings provide evidence that comprehenders use a narrative structure to comprehend visual sequences and that the brain engages similar neurocognitive mechanisms to build structure across multiple domains. Copyright © 2014 Elsevier Ltd. All rights reserved.
Complex Visual Adaptations in Squid for Specific Tasks in Different Environments
Chung, Wen-Sung; Marshall, N. Justin
2017-01-01
In common with their major competitors, the fish, squid are fast-moving visual predators that live over a great range of depths in the ocean. Both squid and fish show a variety of adaptations with respect to optical properties, receptors, and their underlying neural circuits, and these adaptations are often linked to the light conditions of their specific niche. In contrast to the extensive investigation of adaptive strategies in fish vision in response to the varying quantity and quality of available light, our knowledge of visual adaptations in squid remains sparse. This study therefore undertook a comparative study of visual adaptations and capabilities in a number of squid species collected between 0 and 1,200 m. Histology, magnetic resonance imagery (MRI), and depth distributions were used to compare brains, eyes, and visual capabilities, revealing that squid eye design reflects both lifestyle and the versatility of the neural architecture in the visual system. Tubular eyes and two types of regional retinal deformation were identified, and these eye modifications are strongly associated with specific directional visual tasks. In addition, a combination of conventional and immuno-histology demonstrated a new form of complex retina possessing two inner-segment layers in two mid-water squid species that rhythmically migrate across a broad range of depths (50–1,000 m). In contrast to their relatives with the regular single-layered inner-segment retina, which live in the upper mesopelagic layer (50–400 m), this additional retinal interneuronal layer suggests that the visual sensitivity of these two long-distance vertical migrants may increase in response to dimmer environments. PMID:28286484
Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot
Raman, Rajani; Sarkar, Sandip
2016-01-01
Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, arising from the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with perception are observed in early visual areas, but knowledge of the underlying neural mechanism of filling-in at the blind spot is far from complete. In this work, we present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its responses to different bar stimuli presented across the blind spot. We found that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated nonlinear responses when the bar stimulates both sides of the blind spot. Using the generative model, we also show that these responses represent filling-in completion. All of these results are consistent with the findings of psychophysical and physiological studies. We also demonstrate that the tolerance of filling-in qualitatively matches the experimental findings for non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images, along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process may be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
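The core filling-in mechanism, top-down predictions constrained by bottom-up error only where retinal input exists, can be sketched with a single linear predictive-estimator layer. The basis functions, sizes, and learning rate below are invented for illustration; the paper's model is a three-level nonlinear hierarchy. Because no error signal originates from the "blind spot" pixels, the estimate is driven entirely by the surround, and the top-down prediction completes the gap:

```python
import numpy as np

# Generative model: 1-D "images" are combinations of two smooth causes.
n_pix = 20
U = np.stack([np.sin(np.linspace(0.0, np.pi, n_pix)),
              np.linspace(-1.0, 1.0, n_pix)], axis=1)   # (n_pix, 2) basis

# A bar stimulus spanning the blind spot (pixels 8-11 carry no input).
r_true = np.array([1.0, 0.5])
image = U @ r_true
mask = np.ones(n_pix, dtype=bool)
mask[8:12] = False           # blind-spot region: no retinal signal

# Predictive estimation: gradient descent on the prediction error,
# which is computed only where input exists.
r = np.zeros(2)
for _ in range(500):
    error = np.where(mask, image - U @ r, 0.0)  # no error from the blind spot
    r += 0.05 * (U.T @ error)                   # bottom-up error corrects the estimate

filled_in = U @ r   # the prediction "fills in" the blind-spot region
```

Because the surrounding input suffices to identify the latent causes, the reconstructed blind-spot pixels match what the full stimulus would have produced, which is the sense in which prediction implements completion.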
Degraded attentional modulation of cortical neural populations in strabismic amblyopia
Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti
2016-01-01
Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628
Visual Imagery without Visual Perception?
ERIC Educational Resources Information Center
Bertolo, Helder
2005-01-01
The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to determine whether the two processes share the same mechanisms or are independent, drawing on different neural substrates. Most research has focused on whether activation of primary visual areas is required during imagery. Here we review…
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei
2015-02-01
Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in the audiovisual integration.
Fast fMRI can detect oscillatory neural activity in humans.
Lewis, Laura D; Setsompop, Kawin; Rosen, Bruce R; Polimeni, Jonathan R
2016-10-25
Oscillatory neural dynamics play an important role in the coordination of large-scale brain networks. High-level cognitive processes depend on dynamics evolving over hundreds of milliseconds, so measuring neural activity in this frequency range is important for cognitive neuroscience. However, current noninvasive neuroimaging methods are not able to precisely localize oscillatory neural activity above 0.2 Hz. Electroencephalography and magnetoencephalography have limited spatial resolution, whereas fMRI has limited temporal resolution because it measures vascular responses rather than directly recording neural activity. We hypothesized that the recent development of fast fMRI techniques, combined with the extra sensitivity afforded by ultra-high-field systems, could enable precise localization of neural oscillations. We tested whether fMRI can detect neural oscillations using human visual cortex as a model system. We detected small oscillatory fMRI signals in response to stimuli oscillating at up to 0.75 Hz within single scan sessions, and these responses were an order of magnitude larger than predicted by canonical linear models. Simultaneous EEG-fMRI and simulations based on a biophysical model of the hemodynamic response to neuronal activity suggested that the blood oxygen level-dependent response becomes faster for rapidly varying stimuli, enabling the detection of higher frequencies than expected. Accounting for phase delays across voxels further improved detection, demonstrating that identifying vascular delays will be of increasing importance with higher-frequency activity. These results challenge the assumption that the hemodynamic response is slow, and demonstrate that fMRI has the potential to map neural oscillations directly throughout the brain.
Unaware Processing of Tools in the Neural System for Object-Directed Action Representation.
Tettamanti, Marco; Conca, Francesca; Falini, Andrea; Perani, Daniela
2017-11-01
The hypothesis that the brain constitutively encodes observed manipulable objects for the actions they afford is still debated. Yet, crucial evidence demonstrating that, even in the absence of perceptual awareness, the mere visual appearance of a manipulable object triggers a visuomotor coding in the action representation system including the premotor cortex, has hitherto not been provided. In this fMRI study, we instantiated reliable unaware visual perception conditions by means of continuous flash suppression, and we tested in 24 healthy human participants (13 females) whether the visuomotor object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices is activated even under subliminal perceptual conditions. We found consistent activation in the target visuomotor cortices, both with and without perceptual awareness, specifically for pictures of manipulable versus non-manipulable objects. By means of a multivariate searchlight analysis, we also found that the brain activation patterns in this visuomotor network enabled the decoding of manipulable versus non-manipulable object picture processing, both with and without awareness. These findings demonstrate the intimate neural coupling between visual perception and motor representation that underlies manipulable object processing: manipulable object stimuli specifically engage the visuomotor object-directed action representation system, in a constitutive manner that is independent from perceptual awareness. This perceptuo-motor coupling endows the brain with an efficient mechanism for monitoring and planning reactions to external stimuli in the absence of awareness. SIGNIFICANCE STATEMENT Our brain constantly encodes the visual information that hits the retina, leading to a stimulus-specific activation of sensory and semantic representations, even for objects that we do not consciously perceive. 
Do these unconscious representations encompass the motor programming of actions that could be accomplished congruently with the objects' functions? In this fMRI study, we instantiated unaware visual perception conditions by dynamically suppressing the visibility of manipulable object pictures with Mondrian masks. Despite escaping conscious perception, manipulable objects activated an object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices. This demonstrates that visuomotor encoding occurs independently of conscious object perception.
Dissociation of neural mechanisms underlying orientation processing in humans
Ling, Sam; Pearson, Joel; Blake, Randolph
2009-01-01
Orientation selectivity is a fundamental, emergent property of neurons in early visual cortex, and discovery of that property [1, 2] dramatically shaped how we conceptualize visual processing [3–6]. However, much remains unknown about the neural substrates of these basic building blocks of perception, and what is known primarily stems from animal physiology studies. To probe the neural concomitants of orientation processing in humans, we employed repetitive transcranial magnetic stimulation (rTMS) to attenuate neural responses evoked by stimuli presented within a local region of the visual field. Previous physiological studies have shown that rTMS can significantly suppress the neuronal spiking activity, hemodynamic responses, and local field potentials within a focused cortical region [7, 8]. By suppressing neural activity with rTMS, we were able to dissociate components of the neural circuitry underlying two distinct aspects of orientation processing: selectivity and contextual effects. Orientation selectivity gauged by masking was unchanged by rTMS, whereas an otherwise robust orientation repulsion illusion was weakened following rTMS. This dissociation implies that orientation processing relies on distinct mechanisms, only one of which was impacted by rTMS. These results are consistent with models positing that orientation selectivity is largely governed by the patterns of convergence of thalamic afferents onto cortical neurons, with intracortical activity then shaping population responses contained within those orientation-selective cortical neurons. PMID:19682905
Neural Entrainment to Rhythmically Presented Auditory, Visual, and Audio-Visual Speech in Children
Power, Alan James; Mead, Natasha; Barnes, Lisa; Goswami, Usha
2012-01-01
Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal “samples” of information from the speech stream at different rates, phase resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (“phase locking”). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable “ba,” presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a “talking head”). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the “ba” stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a “ba” in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling, such as dyslexia. 
PMID:22833726
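The entrainment measures described above (the amplitude and preferred phase of a stimulus-locked frequency component, and the consistency of phase across trials) can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' analysis pipeline; the sampling rate, target frequency, and function names are invented for the example.

```python
import numpy as np

def entrainment_phase(signal, fs, freq):
    """Amplitude and phase of the spectral component at `freq` (Hz)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))          # nearest FFT bin
    amp = 2.0 * np.abs(spectrum[k]) / n          # amplitude of that component
    phase = np.angle(spectrum[k])                # preferred phase (radians)
    return amp, phase

def phase_locking_value(phases):
    """Length of the mean resultant vector of phases, in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * np.array(phases))))

# Synthetic check: a 2 Hz "entrained" response sampled at 100 Hz for 5 s.
fs, f0 = 100.0, 2.0
t = np.arange(0, 5, 1.0 / fs)
x = 1.5 * np.cos(2 * np.pi * f0 * t + 0.4)
amp, ph = entrainment_phase(x, fs, f0)           # recovers amp 1.5, phase 0.4
```

A cross-modal phase-resetting effect like the one reported would show up as a shift in the recovered phase between the auditory-only and audio-visual conditions.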
Pessoa, L; Thompson, E; Noë, A
1998-12-01
In visual science the term filling-in is used in different ways, which often leads to confusion. This target article presents a taxonomy of perceptual completion phenomena to organize and clarify theoretical and empirical discussion. Examples of boundary completion (illusory contours) and featural completion (color, brightness, motion, texture, and depth) are examined, and single-cell studies relevant to filling-in are reviewed and assessed. Filling-in issues must be understood in relation to theoretical issues about neural-perceptual isomorphism and linking propositions. Six main conclusions are drawn: (1) visual filling-in comprises a multitude of different perceptual completion phenomena; (2) certain forms of visual completion seem to involve spatially propagating neural activity (neural filling-in) and so, contrary to Dennett's (1991; 1992) recent discussion of filling-in, cannot be described as results of the brain's "ignoring an absence" or "jumping to a conclusion"; (3) in certain cases perceptual completion seems to have measurable effects that depend on neural signals representing a presence rather than ignoring an absence; (4) neural filling-in does not imply either "analytic isomorphism" or "Cartesian materialism," and thus the notion of the bridge locus--a particular neural stage that forms the immediate substrate of perceptual experience--is problematic and should be abandoned; (5) to reject the representational conception of vision in favor of an "enactive" or "animate" conception reduces the importance of filling-in as a theoretical category in the explanation of vision; and (6) the evaluation of perceptual content should not be determined by "subpersonal" considerations about internal processing, but rather by considerations about the task of vision at the level of the animal or person interacting with the world.
Linking normative models of natural tasks to descriptive models of neural response.
Jaini, Priyank; Burge, Johannes
2017-10-01
Understanding how nervous systems exploit task-relevant properties of sensory stimuli to perform natural tasks is fundamental to the study of perceptual systems. However, there are few formal methods for determining which stimulus properties are most useful for a given natural task. As a consequence, it is difficult to develop principled models for how to compute task-relevant latent variables from natural signals, and it is difficult to evaluate descriptive models fit to neural responses. Accuracy maximization analysis (AMA) is a recently developed Bayesian method for finding the optimal task-specific filters (receptive fields). Here, we introduce AMA-Gauss, a new, faster form of AMA that incorporates the assumption that the class-conditional filter responses are Gaussian distributed. Then, we use AMA-Gauss to show that its assumptions are justified for two fundamental visual tasks: retinal speed estimation and binocular disparity estimation. Next, we show that AMA-Gauss has striking formal similarities to popular quadratic models of neural response: the energy model and the generalized quadratic model (GQM). Together, these developments deepen our understanding of why the energy model of neural response has proven useful, improve our ability to evaluate results from subunit model fits to neural data, and should help accelerate psychophysics and neuroscience research with natural stimuli.
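The core AMA-Gauss assumption, that filter responses conditioned on the latent variable are Gaussian distributed, reduces optimal decoding to a quadratic (Gaussian-likelihood) classifier. The sketch below illustrates that idea only; it is not the authors' AMA-Gauss code, and the function names and toy data are invented.

```python
import numpy as np

def fit_gaussian(responses):
    """Class-conditional mean and covariance of filter responses."""
    mu = responses.mean(axis=0)
    cov = np.atleast_2d(np.cov(responses, rowvar=False))
    return mu, cov

def log_likelihood(r, mu, cov):
    """Gaussian log-likelihood of response vector r; quadratic in r."""
    d = r - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(r) * np.log(2 * np.pi))

def classify(r, params):
    """Pick the latent-variable class with the highest likelihood."""
    scores = [log_likelihood(r, mu, cov) for mu, cov in params]
    return int(np.argmax(scores))

# Toy 2-filter responses for two latent-variable classes.
rng = np.random.default_rng(0)
class_a = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
class_b = rng.normal([3.0, 3.0], 0.5, size=(200, 2))
params = [fit_gaussian(class_a), fit_gaussian(class_b)]
pred = classify(np.array([2.9, 3.1]), params)    # assigned to class 1
```

Because the log-likelihood is quadratic in the response vector, this decoder has the same functional form as the energy model and the GQM mentioned in the abstract.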
Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M
2017-11-01
The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e., showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e., pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration.
fMRI studies of successful emotional memory encoding: a quantitative meta-analysis
Murty, Vishnu P.; Ritchey, Maureen; Adcock, R. Alison; LaBar, Kevin S.
2010-01-01
Over the past decade, fMRI techniques have been increasingly used to interrogate the neural correlates of successful emotional memory encoding. These investigations have typically aimed to either characterize the contributions of the amygdala and medial temporal lobe (MTL) memory system, replicating results in animals, or delineate the neural correlates of specific behavioral phenomena. It has remained difficult, however, to synthesize these findings into a systems neuroscience account of how networks across the whole brain support the enhancing effects of emotion on memory encoding. To this end, the present study employed a meta-analytic approach using activation likelihood estimates to assess the anatomical specificity and reliability of event-related fMRI activations related to successful memory encoding for emotional versus neutral information. The meta-analysis revealed consistent clusters within bilateral amygdala, anterior hippocampus, anterior and posterior parahippocampal gyrus, the ventral visual stream, left lateral prefrontal cortex and right ventral parietal cortex. The results within the amygdala and MTL support a wealth of findings from the animal literature linking these regions to arousal-mediated memory effects. The consistency of findings in cortical targets, including the visual, prefrontal, and parietal cortices, underscores the importance of generating hypotheses regarding their participation in emotional memory formation. In particular, we propose that the amygdala interacts with these structures to promote enhancements in perceptual processing, semantic elaboration, and attention, which serve to benefit subsequent memory for emotional material. These findings may motivate future research on emotional modulation of widespread neural systems and the implications of this modulation for cognition. PMID:20688087
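The activation likelihood estimation approach named above can be illustrated schematically: each reported focus is blurred by a Gaussian modeling its spatial uncertainty, foci within one experiment are combined by union, and per-experiment maps are combined as a probabilistic sum. The following is a 2-D toy sketch of that scheme only, not the actual ALE/GingerALE implementation; the grid size, sigma, and foci coordinates are invented.

```python
import numpy as np

def gaussian_map(shape, center, sigma):
    """Modeled activation map: Gaussian spatial uncertainty around a focus."""
    grid = np.indices(shape)
    d2 = sum((g - c) ** 2 for g, c in zip(grid, center))
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ale_map(experiments, shape, sigma=2.0):
    """Probabilistic union of per-experiment modeled activation maps."""
    ale = np.zeros(shape)
    for foci in experiments:
        ma = np.zeros(shape)
        for focus in foci:                        # union of foci in one study
            ma = np.maximum(ma, gaussian_map(shape, focus, sigma))
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)      # combine across studies
    return ale

# Toy "experiments": three foci clustered near (10, 10) plus one outlier.
experiments = [[(10, 10)], [(11, 10)], [(10, 11)], [(30, 5)]]
ale = ale_map(experiments, shape=(40, 40))
peak = np.unravel_index(np.argmax(ale), ale.shape)   # lands in the cluster
```

In the real procedure the resulting map is then thresholded against a null distribution of randomly relocated foci to find reliably convergent clusters.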
A bio-inspired system for spatio-temporal recognition in static and video imagery
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas
2007-04-01
This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real-world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.
Single-exposure visual memory judgments are reflected in inferotemporal cortex
Meyer, Travis
2018-01-01
Our visual memory percepts of whether we have encountered specific objects or scenes before are hypothesized to manifest as decrements in neural responses in inferotemporal cortex (IT) with stimulus repetition. To evaluate this proposal, we recorded IT neural responses as two monkeys performed a single-exposure visual memory task designed to measure the rates of forgetting with time. We found that a weighted linear read-out of IT was a better predictor of the monkeys’ forgetting rates and reaction time patterns than a strict instantiation of the repetition suppression hypothesis, expressed as a total spike count scheme. Behavioral predictions could be attributed to visual memory signals that were reflected as repetition suppression and were intermingled with visual selectivity, but only when combined across the most sensitive neurons. PMID:29517485
Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong
2012-01-01
Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between directed attention to auditory spatial location (compared with time intervals) are unclear. Using fMRI (functional magnetic resonance imaging), we measured the activations caused by cue-target paradigms by inducing the visual cueing of attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field, responded to spatial orienting of attention, but activity was absent in the bilateral frontal eye field (FEF) during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800
Control system of hexacopter using color histogram footprint and convolutional neural network
NASA Astrophysics Data System (ADS)
Ruliputra, R. N.; Darma, S.
2017-07-01
The development of unmanned aerial vehicles (UAVs) has been growing rapidly in recent years. Logic implemented in program algorithms is needed to make a smart system. Using visual input from a camera, a UAV is able to fly autonomously by detecting a target. However, outdoor use can change the target's color intensity. A color histogram footprint overcomes this problem because it divides color intensity into separate bins, making detection tolerant to slight changes in color intensity. Template matching compares the detection result with a template of the reference image to determine the target position, which is used to keep the vehicle centered on the target under visual feedback control based on a Proportional-Integral-Derivative (PID) controller. The color histogram footprint method localizes the target by calculating the back projection of its histogram. It has an average success rate of 77% from a distance of 1 meter, and it can center the vehicle on the target using visual feedback control with an average positioning time of 73 seconds. Once the hexacopter is centered on the target, a convolutional neural network (CNN) classifies a number contained in the target image to determine a task depending on the classified number: landing, yawing, or returning to launch. The recognition result shows an optimum success rate of 99.2%.
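The histogram back-projection step described above amounts to: build a normalized hue histogram of the target patch, replace every image pixel with the probability of its hue under that histogram, and take the centroid of the resulting map as the target position. A minimal pure-NumPy sketch, assuming OpenCV's 0-180 hue convention; it is not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def hue_histogram(patch, bins=16):
    """Normalized hue histogram 'footprint' of a target patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def back_projection(image, hist, bins=16):
    """Replace each pixel's hue with its target-histogram probability."""
    idx = np.clip((image.astype(int) * bins) // 180, 0, bins - 1)
    return hist[idx]

def locate_target(image, hist, bins=16):
    """Centroid of the back-projection = estimated target position."""
    bp = back_projection(image, hist, bins)
    ys, xs = np.indices(bp.shape)
    total = bp.sum()
    return float((ys * bp).sum() / total), float((xs * bp).sum() / total)

# Synthetic hue image: background hue 20 with a 10x10 target of hue 100.
img = np.full((80, 120), 20, dtype=np.uint8)
img[30:40, 50:60] = 100
target_hist = hue_histogram(img[30:40, 50:60])
cy, cx = locate_target(img, target_hist)    # centroid of the target block
```

In the flight loop, the offset between this centroid and the image center would feed the PID controller as the position error.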
Role of temporal processing stages by inferior temporal neurons in facial recognition.
Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji
2011-01-01
In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition. PMID:21734904
ERIC Educational Resources Information Center
Posner, Michael I.; And Others
Recently, knowledge of the mechanisms of visual-spatial attention has improved due to studies employing single-cell recording with alert monkeys and studies using performance analysis of neurological patients. These studies suggest that a complex neural network, including parts of the posterior parietal lobe and midbrain, is involved in covert…
Data systems and computer science programs: Overview
NASA Technical Reports Server (NTRS)
Smith, Paul H.; Hunter, Paul
1991-01-01
An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.
Preparing for Future Learning with a Tangible User Interface: The Case of Neuroscience
ERIC Educational Resources Information Center
Schneider, B.; Wallace, J.; Blikstein, P.; Pea, R.
2013-01-01
In this paper, we describe the development and evaluation of a microworld-based learning environment for neuroscience. Our system, BrainExplorer, allows students to discover the way neural pathways work by interacting with a tangible user interface. By severing and reconfiguring connections, users can observe how the visual field is impaired and,…
Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses
2016-01-01
Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
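The reliability measure used above, intersubject correlation of stimulus-evoked activity, is commonly computed leave-one-out: correlate each subject's response time course with the average of everyone else's. A minimal sketch with synthetic data, not the authors' EEG pipeline; all names and parameters are illustrative.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out ISC: correlation of each subject's time course
    with the average of all other subjects.
    data: array of shape (n_subjects, n_timepoints)."""
    data = np.asarray(data, dtype=float)
    iscs = []
    for s in range(data.shape[0]):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(iscs)

# Synthetic check: subjects sharing a stimulus-evoked component vs. pure noise.
rng = np.random.default_rng(1)
shared = np.sin(np.linspace(0, 8 * np.pi, 300))        # stimulus-evoked signal
synced = shared + 0.2 * rng.normal(size=(10, 300))     # reliable responders
noisy = rng.normal(size=(10, 300))                     # idiosyncratic responders
isc_synced = intersubject_correlation(synced).mean()   # high
isc_noisy = intersubject_correlation(noisy).mean()     # near zero
```

The finding that higher ISC during encoding predicts later memory corresponds to comparing these per-subject ISC values against behavioral retrieval scores.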
Bochner, David N.; Sapp, Richard W.; Adelson, Jaimie D.; Zhang, Siyu; Lee, Hanmi; Djurisic, Maja; Syken, Josh; Dan, Yang; Shatz, Carla J.
2015-01-01
During critical periods of development, the brain easily changes in response to environmental stimuli, but this neural plasticity declines by adulthood. By acutely disrupting paired immunoglobulin-like receptor B (PirB) function at specific ages, we show that PirB actively represses neural plasticity throughout life. We disrupted PirB function either by genetically introducing a conditional PirB allele into mice or by minipump infusion of a soluble PirB ectodomain (sPirB) into mouse visual cortex. We found that neural plasticity, as measured by depriving mice of vision in one eye and testing ocular dominance, was enhanced by this treatment both during the critical period and when PirB function was disrupted in adulthood. Acute blockade of PirB triggered the formation of new functional synapses, as indicated by increases in miniature excitatory postsynaptic current (mEPSC) frequency and spine density on dendrites of layer 5 pyramidal neurons. In addition, recovery from amblyopia (the decline in visual acuity and spine density resulting from long-term monocular deprivation) was possible after a 1-week infusion of sPirB after the deprivation period. Thus, neural plasticity in adult visual cortex is actively repressed and can be enhanced by blocking PirB function. PMID:25320232
Salient sounds activate human visual cortex automatically.
McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A
2013-05-22
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Whole-brain activity mapping onto a zebrafish brain atlas
Randlett, Owen; Wee, Caroline L.; Naumann, Eva A.; Nnaemeka, Onyeka; Schoppik, David; Fitzgerald, James E.; Portugues, Ruben; Lacoste, Alix M.B.; Riegler, Clemens; Engert, Florian; Schier, Alexander F.
2015-01-01
In order to localize the neural circuits involved in generating behaviors, it is necessary to assign activity onto anatomical maps of the nervous system. Using brain registration across hundreds of larval zebrafish, we have built an expandable open source atlas containing molecular labels and anatomical region definitions, the Z-Brain. Using this platform and immunohistochemical detection of phosphorylated extracellular signal-regulated kinase (ERK/MAPK) as a readout of neural activity, we have developed a system to create and contextualize whole brain maps of stimulus- and behavior-dependent neural activity. This MAP-Mapping (Mitogen Activated Protein kinase – Mapping) assay is technically simple, fast, inexpensive, and data analysis is completely automated. Since MAP-Mapping is performed on fish that are freely swimming, it is applicable to nearly any stimulus or behavior. We demonstrate the utility of our high-throughput approach using hunting/feeding, pharmacological, visual and noxious stimuli. The resultant maps outline hundreds of areas associated with behaviors. PMID:26778924
How the Human Brain Represents Perceived Dangerousness or “Predacity” of Animals
Sha, Long; Guntupalli, J. Swaroop; Oosterhof, Nikolaas; Halchenko, Yaroslav O.; Nastase, Samuel A.; di Oleggio Castello, Matteo Visconti; Abdi, Hervé; Jobst, Barbara C.; Gobbini, M. Ida; Haxby, James V.
2016-01-01
Common or folk knowledge about animals is dominated by three dimensions: (1) level of cognitive complexity or “animacy;” (2) dangerousness or “predacity;” and (3) size. We investigated the neural basis of the perceived dangerousness or aggressiveness of animals, which we refer to more generally as “perception of threat.” Using functional magnetic resonance imaging (fMRI), we analyzed neural activity evoked by viewing images of animal categories that spanned the dissociable semantic dimensions of threat and taxonomic class. The results reveal a distributed network for perception of threat extending along the right superior temporal sulcus. We compared neural representational spaces with target representational spaces based on behavioral judgments and a computational model of early vision and found a processing pathway in which perceived threat emerges as a dominant dimension: whereas visual features predominate in early visual cortex and taxonomy in lateral occipital and ventral temporal cortices, these dimensions fall away progressively from posterior to anterior temporal cortices, leaving threat as the dominant explanatory variable. Our results suggest that the perception of threat in the human brain is associated with neural structures that underlie perception and cognition of social actions and intentions, suggesting a broader role for these regions than has been thought previously, one that includes the perception of potential threat from agents independent of their biological class. SIGNIFICANCE STATEMENT For centuries, philosophers have wondered how the human mind organizes the world into meaningful categories and concepts. Today this question is at the core of cognitive science, but our focus has shifted to understanding how knowledge manifests in dynamic activity of neural systems in the human brain. 
This study advances the young field of empirical neuroepistemology by characterizing the neural systems engaged by an important dimension in our cognitive representation of the ontological subdomain of the animal kingdom: how the brain represents the perceived threat, dangerousness, or “predacity” of animals. Our findings reveal how activity for domain-specific knowledge of animals overlaps the social perception networks of the brain, suggesting domain-general mechanisms underlying the representation of conspecifics and other animals. PMID:27170133
Self-organizing neural integration of pose-motion features for human action recognition
Parisi, German I.; Weber, Cornelius; Wermter, Stefan
2015-01-01
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
Walking modulates speed sensitivity in Drosophila motion vision.
Chiappe, M Eugenia; Seelig, Johannes D; Reiser, Michael B; Jayaraman, Vivek
2010-08-24
Changes in behavioral state modify neural activity in many systems. In some vertebrates such modulation has been observed and interpreted in the context of attention and sensorimotor coordinate transformations. Here we report state-dependent activity modulations during walking in a visual-motor pathway of Drosophila. We used two-photon imaging to monitor intracellular calcium activity in motion-sensitive lobula plate tangential cells (LPTCs) in head-fixed Drosophila walking on an air-supported ball. Cells of the horizontal system (HS), a subgroup of LPTCs, showed stronger calcium transients in response to visual motion when flies were walking rather than resting. The amplified responses were also correlated with walking speed. Moreover, HS neurons showed a relatively higher gain in response strength at higher temporal frequencies, and their optimum temporal frequency was shifted toward higher motion speeds. Walking-dependent modulation of HS neurons in the Drosophila visual system may constitute a mechanism to facilitate processing of higher image speeds in behavioral contexts where these speeds of visual motion are relevant for course stabilization. Copyright 2010 Elsevier Ltd. All rights reserved.
Eye Velocity Gain Fields in MSTd During Optokinetic Stimulation
Brostek, Lukas; Büttner, Ulrich; Mustari, Michael J.; Glasauer, Stefan
2015-01-01
Lesion studies argue for an involvement of the cortical dorsal medial superior temporal area (MSTd) in the control of optokinetic response (OKR) eye movements to planar visual stimulation. Neural recordings during OKR suggested that MSTd neurons directly encode stimulus velocity. On the other hand, studies using radial visual flow together with voluntary smooth pursuit eye movements showed that visual motion responses were modulated by eye movement-related signals. Here, we investigated neural responses in MSTd during continuous optokinetic stimulation using an information-theoretic approach for characterizing neural tuning with high resolution. We show that the majority of MSTd neurons exhibit gain-field-like tuning functions rather than directly encoding one variable. Neural responses showed a large diversity of tuning to combinations of retinal and extraretinal input. Eye velocity-related activity was observed prior to the actual eye movements, reflecting an efference copy. The observed tuning functions resembled those emerging in a network model trained to perform summation of two population-coded signals. Together, our findings support the hypothesis that MSTd implements the visuomotor transformation from retinal to head-centered stimulus velocity signals for the control of OKR. PMID:24557636
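The gain-field-like tuning described in this record combines a retinal tuning curve with a multiplicative eye-movement signal, so that neither variable is encoded on its own. A minimal toy model, with entirely hypothetical parameter values, might look like this:

```python
import numpy as np

def gain_field_response(stim_vel, eye_vel,
                        pref_vel=10.0, sigma=5.0, gain_slope=0.05):
    """Toy gain-field neuron: Gaussian tuning to retinal stimulus
    velocity (deg/s), multiplicatively scaled by an eye-velocity
    signal. The response depends on the product of the two factors,
    which is the defining property of a gain field."""
    tuning = np.exp(-(stim_vel - pref_vel) ** 2 / (2 * sigma ** 2))
    gain = 1.0 + gain_slope * eye_vel
    return gain * tuning

# The same retinal input yields a larger response during faster eye
# movements, while the preferred stimulus velocity stays unchanged.
print(gain_field_response(10.0, 0.0), gain_field_response(10.0, 20.0))
```

Reading out such a population with reliability-weighted summation is one way a downstream stage could recover head-centered stimulus velocity, which is the transformation the abstract attributes to MSTd.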
Neural Global Pattern Similarity Underlies True and False Memories.
Ye, Zhifang; Zhu, Bi; Zhuang, Liping; Lu, Zhonglin; Chen, Chuansheng; Xue, Gui
2016-06-22
The neural processes giving rise to human memory strength signals remain poorly understood. Inspired by formal computational models that posit a central role of global matching in memory strength, we tested a novel hypothesis that the strengths of both true and false memories arise from the global similarity of an item's neural activation pattern during retrieval to that of all the studied items during encoding (i.e., the encoding-retrieval neural global pattern similarity [ER-nGPS]). We revealed multiple ER-nGPS signals that carried distinct information and contributed differentially to true and false memories: Whereas the ER-nGPS in the parietal regions reflected semantic similarity and was scaled with the recognition strengths of both true and false memories, ER-nGPS in the visual cortex contributed solely to true memory. Moreover, ER-nGPS differences between the parietal and visual cortices were correlated with frontal monitoring processes. By combining computational and neuroimaging approaches, our results advance a mechanistic understanding of memory strength in recognition. What neural processes give rise to memory strength signals, and lead to our conscious feelings of familiarity? Using fMRI, we found that the memory strength of a given item depends not only on how it was encoded during learning, but also on the similarity of its neural representation with other studied items. The global neural matching signal, mainly in the parietal lobule, could account for the memory strengths of both studied and unstudied items. Interestingly, a different global matching signal, originated from the visual cortex, could distinguish true from false memories. The findings reveal multiple neural mechanisms underlying the memory strengths of events registered in the brain. Copyright © 2016 the authors 0270-6474/16/366792-11$15.00/0.
Isolating Discriminant Neural Activity in the Presence of Eye Movements and Concurrent Task Demands
Touryan, Jon; Lawhern, Vernon J.; Connolly, Patrick M.; Bigdely-Shamlo, Nima; Ries, Anthony J.
2017-01-01
A growing number of studies use the combination of eye-tracking and electroencephalographic (EEG) measures to explore the neural processes that underlie visual perception. In these studies, fixation-related potentials (FRPs) are commonly used to quantify early and late stages of visual processing that follow the onset of each fixation. However, FRPs reflect a mixture of bottom-up (sensory-driven) and top-down (goal-directed) processes, in addition to eye movement artifacts and unrelated neural activity. At present there is little consensus on how to separate this evoked response into its constituent elements. In this study we sought to isolate the neural sources of target detection in the presence of eye movements and over a range of concurrent task demands. Here, participants were asked to identify visual targets (Ts) amongst a grid of distractor stimuli (Ls), while simultaneously performing an auditory N-back task. To identify the discriminant activity, we used independent components analysis (ICA) for the separation of EEG into neural and non-neural sources. We then further separated the neural sources, using a modified measure-projection approach, into six regions of interest (ROIs): occipital, fusiform, temporal, parietal, cingulate, and frontal cortices. Using activity from these ROIs, we identified target from non-target fixations in all participants at a level similar to other state-of-the-art classification techniques. Importantly, we isolated the time course and spectral features of this discriminant activity in each ROI. In addition, we were able to quantify the effect of cognitive load on both fixation-locked potential and classification performance across regions. Together, our results show the utility of a measure-projection approach for separating task-relevant neural activity into meaningful ROIs within more complex contexts that include eye movements. PMID:28736519
Oyedotun, Oyebade K; Khashman, Adnan
2017-02-01
Humans are adept at recognizing patterns and discovering even abstract features which are sometimes embedded therein. Our ability to use the banknotes in circulation for business transactions lies in the effortlessness with which we can recognize the different banknote denominations after seeing them over a period of time. More significant is that we can usually recognize these banknote denominations irrespective of what parts of the banknotes are exposed to us visually. Furthermore, our recognition ability is largely unaffected even when these banknotes are partially occluded. By a similar analogy, the robustness of intelligent systems to perform the task of banknote recognition should not collapse under some minimum level of partial occlusion. Artificial neural networks are intelligent systems which from inception have taken many important cues related to structure and learning rules from the human nervous/cognition processing system. Likewise, it has been shown that advances in artificial neural network simulations can help us understand the human nervous/cognition system even further. In this paper, we investigate three cognition hypothetical frameworks to vision-based recognition of banknote denominations using competitive neural networks. In order to make the task more challenging and stress-test the investigated hypotheses, we also consider the recognition of occluded banknotes. The implemented hypothetical systems are tasked to perform fast recognition of banknotes with up to 75% occlusion. The investigated hypothetical systems are trained on Nigeria's Naira banknotes and several experiments are performed to demonstrate the findings presented within this work.
Infant Visual Attention and Object Recognition
Reynolds, Greg D.
2015-01-01
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333
Neural basis of forward flight control and landing in honeybees.
Ibbotson, M R; Hung, Y-S; Meffin, H; Boeddeker, N; Srinivasan, M V
2017-11-06
The impressive repertoire of honeybees' visually guided behaviors, and their ability to learn, have made them an important tool for elucidating the visual basis of behavior. Like other insects, bees perform optomotor course correction to optic flow, a response that is dependent on the spatial structure of the visual environment. However, bees can also distinguish the speed of image motion during forward flight and landing, as well as estimate flight distances (odometry), irrespective of the visual scene. The neural pathways underlying these abilities are unknown. Here we report on a cluster of descending neurons (DNIIIs) that are shown to have the directional tuning properties necessary for detecting image motion during forward flight and landing on vertical surfaces. They have stable firing rates during prolonged periods of stimulation and respond to a wide range of image speeds, making them suitable to detect image flow during flight behaviors. While their responses are not strictly speed tuned, the shape and amplitudes of their speed tuning functions are resistant to large changes in spatial frequency. These cells are prime candidates not only for the control of flight speed and landing, but also for providing the neural 'front end' of the honeybee's visual odometer.
Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.
2014-01-01
The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267
Fine-grained visual marine vessel classification for coastal surveillance and defense applications
NASA Astrophysics Data System (ADS)
Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
The need for capabilities of automated visual content analysis has substantially increased due to presence of large number of images captured by surveillance cameras. With a focus on development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in visual categorization of generic images. For fine-grained image categorization, a closely related yet a more challenging research problem compared to generic image categorization due to high visual similarities within subgroups, diverse applications were developed such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune deep representations for marine vessel images. Experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment targets coarse categorization, as well as learning models of fine categories. The verification experiment involves identifying specific naval vessels by revealing whether a pair of images belongs to the same marine vessel with the help of the learnt deep representations. Given the promising performance obtained, we believe the presented capabilities would be essential components of future coastal and on-board surveillance systems.
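The classification stage described above, deep visual representations fed into a support vector machine, can be sketched with a minimal linear SVM trained by subgradient descent on the hinge loss. The synthetic feature vectors below stand in for real deep representations, and all hyperparameters are illustrative; a production system would use an off-the-shelf SVM solver.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Minimal linear SVM fit by subgradient descent on the
    L2-regularized hinge loss. X: (n, d) feature matrix (standing in
    for deep visual representations); y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # margin violations
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0, 1, -1)

# Synthetic stand-in for deep features of two vessel categories.
rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((100, 16)) + 1.0,
               rng.standard_normal((100, 16)) - 1.0])
y = np.array([1] * 100 + [-1] * 100)
w, b = train_linear_svm(X, y)
print((predict(w, b, X) == y).mean())
```

The verification scenario would instead compare the learnt representations of an image pair (e.g., by a distance threshold), but the feature-then-classifier split is the same.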
Dagnino-Subiabre, A; Terreros, G; Carmona-Fontaine, C; Zepeda, R; Orellana, J A; Díaz-Véliz, G; Mora, S; Aboitiz, F
2005-01-01
Chronic stress affects brain areas involved in learning and emotional responses. These alterations have been related with the development of cognitive deficits in major depression. The aim of this study was to determine the effect of chronic immobilization stress on the auditory and visual mesencephalic regions in the rat brain. We analyzed in Golgi preparations whether stress impairs the neuronal morphology of the inferior (auditory processing) and superior colliculi (visual processing). Afterward, we examined the effect of stress on acoustic and visual conditioning using an avoidance conditioning test. We found that stress induced dendritic atrophy in inferior colliculus neurons and did not affect neuronal morphology in the superior colliculus. Furthermore, stressed rats showed a stronger impairment in acoustic conditioning than in visual conditioning. Fifteen days post-stress the inferior colliculus neurons completely restored their dendritic structure, showing a high level of neural plasticity that is correlated with an improvement in acoustic learning. These results suggest that chronic stress has more deleterious effects on the subcortical auditory system than on the visual system and may affect the aversive system and fear-like behaviors. Our study opens a new approach to understand the pathophysiology of stress and stress-related disorders such as major depression.
Decentralized Multisensory Information Integration in Neural Systems.
Zhang, Wen-Hao; Chen, Aihua; Rasch, Malte J; Wu, Si
2016-01-13
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. 
Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. Copyright © 2016 Zhang et al.
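The "optimal integration" benchmark invoked in this record has a standard closed form for independent Gaussian cues: each cue's estimate is weighted by its reliability (inverse variance), and the fused variance is lower than that of any single cue. A small numerical sketch, with illustrative numbers:

```python
import numpy as np

def integrate_cues(means, variances):
    """Maximum-likelihood fusion of independent Gaussian cue
    estimates. Weights are reliabilities (inverse variances),
    normalized to sum to one; the fused variance is the inverse
    of the summed reliabilities."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    reliabilities = 1.0 / variances
    weights = reliabilities / reliabilities.sum()
    fused_mean = float(np.dot(weights, means))
    fused_var = 1.0 / float(reliabilities.sum())
    return fused_mean, fused_var

# Heading estimates from a visual cue (10 deg, variance 4) and a
# vestibular cue (20 deg, variance 16): the fused estimate lies
# nearer the more reliable visual cue.
m, v = integrate_cues([10.0, 20.0], [4.0, 16.0])
print(m, v)  # 12.0 3.2
```

In the decentralized model, no single area computes this formula explicitly; the claim is that the same optimal combination emerges from the dynamics of reciprocally connected multisensory areas.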
NASA Technical Reports Server (NTRS)
Krauzlis, R. J.; Stone, L. S.
1999-01-01
The two components of voluntary tracking eye movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.
Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.
2015-01-01
Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766
Eyes Matched to the Prize: The State of Matched Filters in Insect Visual Circuits.
Kohn, Jessica R; Heath, Sarah L; Behnia, Rudy
2018-01-01
Confronted with an ever-changing visual landscape, animals must be able to detect relevant stimuli and translate this information into behavioral output. A visual scene contains an abundance of information: to interpret the entirety of it would be uneconomical. To optimally perform this task, neural mechanisms exist to enhance the detection of important features of the sensory environment while simultaneously filtering out irrelevant information. This can be accomplished by using a circuit design that implements specific "matched filters" that are tuned to relevant stimuli. Following this rule, the well-characterized visual systems of insects have evolved to streamline feature extraction on both a structural and functional level. Here, we review examples of specialized visual microcircuits for vital behaviors across insect species, including feature detection, escape, and estimation of self-motion. Additionally, we discuss how these microcircuits are modulated to weigh relevant input with respect to different internal and behavioral states.
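A matched filter of the kind described can be sketched as a sliding correlation between a stored template and the incoming signal: the response peaks where the stimulus matches the filter's tuning and stays low elsewhere, which is how such a circuit enhances relevant features while filtering out the rest. The function and example values below are purely illustrative, not drawn from any particular insect circuit:

```python
# Matched filtering as sliding correlation: the filter's response at each
# position is the dot product of the template with the local signal window.

def matched_filter_response(signal, template):
    n = len(template)
    return [sum(signal[i + j] * template[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

template = [1.0, -1.0, 1.0]        # the "relevant feature" the filter is tuned to
signal = [0, 0, 1, -1, 1, 0, 0]    # that feature embedded in background input
resp = matched_filter_response(signal, template)
peak = resp.index(max(resp))       # position where the feature occurs
```

The response is maximal (3.0) exactly where the embedded feature aligns with the template, illustrating how tuning a filter to a behaviorally relevant stimulus makes its detection economical.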
Neural circuits underlying visually evoked escapes in larval zebrafish
Dunn, Timothy W.; Gebhardt, Christoph; Naumann, Eva A.; Riegler, Clemens; Ahrens, Misha B.; Engert, Florian; Del Bene, Filippo
2015-01-01
Escape behaviors deliver organisms away from imminent catastrophe. Here, we characterize behavioral responses of freely swimming larval zebrafish to looming visual stimuli simulating predators. We report that the visual system alone can recruit lateralized, rapid escape motor programs, similar to those elicited by mechanosensory modalities. Two-photon calcium imaging of retino-recipient midbrain regions isolated the optic tectum as an important center processing looming stimuli, with ensemble activity encoding the critical image size determining escape latency. Furthermore, we describe activity in retinal ganglion cell terminals and superficial inhibitory interneurons in the tectum during looming and propose a model for how temporal dynamics in tectal periventricular neurons might arise from computations between these two fundamental constituents. Finally, laser ablations of hindbrain circuitry confirmed that visual and mechanosensory modalities share the same premotor output network. Together, we establish a circuit for the processing of aversive stimuli in the context of an innate visual behavior. PMID:26804997
Choich, J A; Sass, J B; Silbergeld, E K
2002-01-01
Methods of identifying and preventing ecotoxicity related to environmental stressors on wildlife species are underdeveloped. To detect sublethal effects, we have devised a neurochemical method of evaluating environmental neurotoxins by measuring changes in regional neural activity in the central nervous system of fish. Our system is a unique adaptation of the 2-deoxyglucose (2-DG) method originally developed by L. Sokoloff in 1977, which is based on the direct relationship between glucose metabolism and neural functioning at the regional level. We applied these concepts to test the assumption that changes in neural activity as a result of chemical exposure would produce measurable effects on the amount of [(14)C]2-DG accumulated regionally in the brain of Tilapia nilotica. For purposes of this study, we utilized the excitotoxin N-methyl-D-aspartate (NMDA) to characterize the response of the central nervous system. Regional accumulation of [(14)C]2-DG was visualized by autoradiography and digital image processing. Observable increases in regional [(14)C]2-DG uptake were evident in all NMDA-treated groups as compared to controls. Specific areas of increased [(14)C]2-DG uptake included the telencephalon, optic tectum, and regions of the cerebellum, all areas in which high concentrations of NMDA-subtype glutamate receptors have been found in Tilapia mossambica. These results are consistent with the known neural excitatory action of NMDA.
Born, Jannis; Galeazzi, Juan M; Stringer, Simon M
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
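Continuous transformation (CT) learning as described above needs only a Hebbian rule plus the spatial overlap between successive shifted views: each view strengthens weights that the next, overlapping view can then exploit, so the neuron's response spreads across the whole continuum of positions. The toy below is a minimal sketch of that mechanism; the pattern sizes, threshold, and learning rate are our illustrative assumptions, not VisNet's actual parameters:

```python
# CT learning sketch: a single neuron, initially tuned only to the first
# retinal position, becomes responsive to a far shift it never drove
# directly, purely through Hebbian updates on overlapping inputs.

def response(w, x, theta=0.5):
    # Binary response: thresholded dot product of weights and input.
    return 1.0 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0.0

def hebbian_step(w, x, lr=0.5):
    # Hebb's rule: weights grow where pre- and post-synaptic activity coincide.
    y = response(w, x)
    return [wi + lr * y * xi for wi, xi in zip(w, x)], y

# The same stimulus shifted one position at a time; neighbours overlap.
patterns = [
    [1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
]

w = [0.6, 0.6, 0.0, 0.0, 0.0, 0.0]   # initially tuned to the first position only
before = response(w, patterns[-1])   # 0.0: no response to the far shift
for x in patterns:                   # present the shifts in order of overlap
    w, _ = hebbian_step(w, x)
after = response(w, patterns[-1])    # 1.0: invariance acquired via overlap
```

Presenting the shifts in the reverse order of overlap, or with no overlap between neighbours, would break the chain, which is why the mechanism depends on the near continuum of training locations the abstract describes.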
Dissociation between the neural correlates of conscious face perception and visual attention.
Navajas, Joaquin; Nitka, Aleksander W; Quian Quiroga, Rodrigo
2017-08-01
Given that attended stimuli are more likely to be recognized than unattended ones, the neural correlates of these two processes, attention and awareness, tend to be intermingled in experimental designs. In this study, we dissociated the neural correlates of conscious face perception from the effects of visual attention. To do this, we presented faces at the threshold of awareness and manipulated attention through the use of exogenous prestimulus cues. We show that the N170 component, a scalp EEG marker of face perception, was modulated independently by attention and by awareness. An earlier P1 component was not modulated by either of the two effects and a later P3 component was indicative of awareness but not of attention. These claims are supported by converging evidence from (a) modulations observed in the average evoked potentials, (b) correlations between neural and behavioral data at the single-subject level, and (c) single-trial analyses. Overall, our results show a clear dissociation between the neural substrates of attention and awareness. Based on these results, we argue that conscious face perception is triggered by a boost in face-selective cortical ensembles that can be modulated by, but are still independent from, visual attention. © 2017 Society for Psychophysiological Research.
A steady state visually evoked potential investigation of memory and ageing.
Macpherson, Helen; Pipingas, Andrew; Silberstein, Richard
2009-04-01
Old age is generally accompanied by a decline in memory performance. Specifically, neuroimaging and electrophysiological studies have revealed that there are age-related changes in the neural correlates of episodic and working memory. This study investigated age-associated changes in the steady state visually evoked potential (SSVEP) amplitude and latency associated with memory performance. Participants were 15 older (59-67 years) and 14 younger (20-30 years) adults who performed an object working memory (OWM) task and a contextual recognition memory (CRM) task, whilst the SSVEP was recorded from 64 electrode sites. Retention of a single object in the low demand OWM task was characterised by smaller frontal SSVEP amplitude and latency differences in older adults than in younger adults, indicative of an age-associated reduction in neural processes. Recognition of visual images in the more difficult CRM task was accompanied by larger, more sustained SSVEP amplitude and latency decreases over temporal parietal regions in older adults. In contrast, the more transient, frontally mediated pattern of activity demonstrated by younger adults suggests that younger and older adults utilize different neural resources to perform recognition judgements. The results provide support for compensatory processes in the aging brain; at lower task demands, older adults demonstrate reduced neural activity, whereas at greater task demands neural activity is increased.
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.
The Effects of Pharmacological Opioid Blockade on Neural Measures of Drug Cue-Reactivity in Humans.
Courtney, Kelly E; Ghahremani, Dara G; Ray, Lara A
2016-11-01
Interactions between dopaminergic and opioidergic systems have been implicated in the reinforcing properties of drugs of abuse. The present study investigated the effects of opioid blockade, via naltrexone, on functional magnetic resonance imaging (fMRI) measures during methamphetamine cue-reactivity to elucidate the role of endogenous opioids in the neural systems underlying drug craving. To investigate this question, non-treatment seeking individuals with methamphetamine use disorder (N=23; 74% male, mean age=34.70 (SD=8.95)) were recruited for a randomized, placebo controlled, within-subject design and underwent a visual methamphetamine cue-reactivity task during two blood-oxygen-level dependent (BOLD) fMRI sessions following 3 days of naltrexone (50 mg) and a matched period of placebo. fMRI analyses tested naltrexone-induced differences in BOLD activation and functional connectivity during cue processing. The results showed that naltrexone administration reduced cue-reactivity in sensorimotor regions and altered the functional connectivity of the dorsal striatum, ventral tegmental area, and precuneus with frontal, visual, sensory, and motor-related regions. Naltrexone also weakened the associations between subjective craving and precuneus functional connectivity with sensorimotor regions and strengthened the associations between subjective craving and dorsal striatum and precuneus connectivity with frontal regions. In conclusion, this study provides the first evidence that opioidergic blockade alters neural responses to drug cues in humans with methamphetamine addiction and suggests that naltrexone may be reducing drug cue salience by decreasing the involvement of sensorimotor regions and by engaging greater frontal regulation over salience attribution.
Mental Imagery and Visual Working Memory
Keogh, Rebecca; Pearson, Joel
2011-01-01
Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024
A visual metaphor describing neural dynamics in schizophrenia.
van Beveren, Nico J M; de Haan, Lieuwe
2008-07-09
In many scientific disciplines the use of a metaphor as a heuristic aid is not uncommon. A well known example in somatic medicine is the 'defense army metaphor' used to characterize the immune system. In fact, probably a large part of the everyday work of doctors consists of 'translating' scientific and clinical information (i.e. causes of disease, percentage of success versus risk of side-effects) into information tailored to the needs and capacities of the individual patient. The ability to do so in an effective way is at least partly what makes a clinician a good communicator. Schizophrenia is a severe psychiatric disorder which affects approximately 1% of the population. Over the last two decades a large amount of molecular-biological, imaging and genetic data have been accumulated regarding the biological underpinnings of schizophrenia. However, it remains difficult to understand how the characteristic symptoms of schizophrenia such as hallucinations and delusions are related to disturbances on the molecular-biological level. In general, psychiatry seems to lack a conceptual framework with sufficient explanatory power to link the mental and molecular-biological domains. Here, we present an essay-like study in which we propose to use visualized concepts stemming from the theory of dynamical complex systems as a 'visual metaphor' to bridge the mental and molecular-biological domains in schizophrenia. We first describe a computer model of neural information processing; we show how the information processing in this model can be visualized, using concepts from the theory of complex systems. We then describe two computer models which have been used to investigate the primary theory on schizophrenia, the neurodevelopmental model, and show how disturbed information processing in these two computer models can be presented in terms of the visual metaphor previously described. 
Finally, we describe the effects of dopamine neuromodulation, of which disturbances have been frequently described in schizophrenia, in terms of the same visualized metaphor. The conceptual framework and metaphor described offers a heuristic tool to understand the relationship between the mental- and molecular-biological domains in an intuitive way. The concepts we present may serve to facilitate communication between researchers, clinicians and patients.