Sample records for human visual processing

  1. Paintings, photographs, and computer graphics are calculated appearances

    NASA Astrophysics Data System (ADS)

    McCann, John

    2012-03-01

    Painters reproduce the appearances they see, or visualize. The entire human visual system is the first part of that process, providing extensive spatial processing. Painters have used spatial techniques since the Renaissance to render HDR scenes. Silver halide photography responds to the light falling on single film pixels. Film can only mimic the retinal response of the cones at the start of the visual process. Film cannot mimic the spatial processing in humans. Digital image processing can. This talk studies three dramatic visual illusions and uses the spatial mechanisms found in human vision to interpret their appearances.

  2. Digital Images and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    Processing of digital images destined for visual consumption raises many interesting questions regarding human visual sensitivity. This talk will survey some of these questions, including some that have been answered and some that have not. There will be an emphasis upon visual masking, and a distinction will be drawn between masking due to contrast gain control processes, and due to processes such as hypothesis testing, pattern recognition, and visual search.

  3. A comparative psychophysical approach to visual perception in primates.

    PubMed

    Matsuno, Toyomi; Fujita, Kazuo

    2009-04-01

    Studies on the visual processing of primates, which have well developed visual systems, provide essential information about the perceptual bases of their higher-order cognitive abilities. Although the mechanisms underlying visual processing are largely shared between human and nonhuman primates, differences have also been reported. In this article, we review psychophysical investigations comparing the basic visual processing that operates in human and nonhuman species, and discuss the future contributions potentially deriving from such comparative psychophysical approaches to primate minds.

  4. The Human is the Loop: New Directions for Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren

    2014-01-28

    Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.

  5. Evolutionary relevance facilitates visual information processing.

    PubMed

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  6. Structural and functional correlates of visual field asymmetry in the human brain by diffusion kurtosis MRI and functional MRI.

    PubMed

    O'Connell, Caitlin; Ho, Leon C; Murphy, Matthew C; Conner, Ian P; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C

    2016-11-09

    Human visual performance has been observed to show superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI, respectively, in 15 healthy individuals at 3 T. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In diffusion kurtosis MRI, the brain regions mapping to the lower visual field showed higher mean kurtosis, but not fractional anisotropy or mean diffusivity compared with the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing.

  7. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
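    The time-resolved decoding this review surveys can be illustrated with a small synthetic sketch. Everything below is invented for illustration (trial counts, sensor counts, and the injected "category" signal), and a simple nearest-class-mean classifier stands in for the multivariate decoders used in the literature: a classifier is cross-validated independently at each time point, yielding an accuracy time-course.

```python
# Synthetic sketch of time-resolved M/EEG decoding. All data are fake:
# 80 trials, 32 sensors, 50 time points, with a weak category signal
# injected only into a late time window (samples 30-40).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 32, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))  # trials x sensors x time
y = rng.integers(0, 2, size=n_trials)                # object-category labels
X[y == 1, :, 30:40] += 0.5                           # late-emerging signal

def decode_accuracy(patterns, labels, n_folds=5):
    """Cross-validated nearest-class-mean decoding of one time point."""
    order = rng.permutation(len(labels))
    folds = np.array_split(order, n_folds)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(order, fold)
        means = [patterns[train][labels[train] == c].mean(axis=0)
                 for c in (0, 1)]
        dist = [np.linalg.norm(patterns[fold] - m, axis=1) for m in means]
        pred = (dist[1] < dist[0]).astype(int)       # closer class mean wins
        correct += int((pred == labels[fold]).sum())
    return correct / len(labels)

# One cross-validated accuracy per time point: decodable information
# should rise only in the injected 30-40 window.
accuracy = np.array([decode_accuracy(X[:, :, t], y) for t in range(n_times)])
```

    With these synthetic data, accuracy hovers near chance early in the epoch and rises only in the injected window, mimicking the late emergence of category information the review describes.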

  8. Experience, Context, and the Visual Perception of Human Movement

    ERIC Educational Resources Information Center

    Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie

    2004-01-01

    Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…

  9. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    PubMed

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this remains difficult for computers. Object classification is therefore a problem of broad and active interest. Inspired by neuroscience, deep learning methods such as the convolutional neural network (CNN) can be used to solve classification problems. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we propose a new classification method that combines a visual attention model with a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use a CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method depends not only on those local features but also incorporates human semantic features to classify objects. The method has clear advantages in biological plausibility. Experimental results demonstrate that it significantly improves classification efficiency.
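    As a rough illustration of the attend-then-classify idea described above (not the authors' actual model): the sketch below computes a crude center-surround saliency map on a toy image, crops a patch around the saliency peak, and hands it to a stand-in classifier. A real system would use a learned saliency model and a CNN; the image, patch size, and threshold here are all invented.

```python
# Toy attend-then-classify pipeline: saliency -> crop -> classify.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)

# Toy image: a bright square on a noisy background plays the object.
img = rng.normal(0.0, 0.05, size=(64, 64))
img[20:30, 40:50] += 1.0

# Stage 1 -- saliency: crude center-surround contrast (fine-scale local
# mean minus coarse-scale local mean).
saliency = np.abs(uniform_filter(img, 3) - uniform_filter(img, 21))

# Stage 2 -- attention: crop a fixed-size patch around the saliency peak;
# this patch is what would be fed to a CNN.
r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
r0 = int(np.clip(r - 8, 0, img.shape[0] - 16))
c0 = int(np.clip(c - 8, 0, img.shape[1] - 16))
patch = img[r0:r0 + 16, c0:c0 + 16]

# Stage 3 -- classification: a trivial brightness rule stands in for the
# CNN-plus-semantic-features stage of the paper.
label = "object" if patch.mean() > 0.2 else "background"
```

    The saliency peak lands inside the bright square, so the attended patch covers the "object" and the stand-in classifier labels it accordingly.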

  11. The multisensory function of the human primary visual cortex.

    PubMed

    Murray, Micah M; Thelen, Antonia; Thut, Gregor; Romei, Vincenzo; Martuzzi, Roberto; Matusz, Pawel J

    2016-03-01

    It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient and hard evidence that supports this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  13. Visual Motion Perception and Visual Attentive Processes.

    DTIC Science & Technology

    1988-04-01

    88-0551. Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364. Cited works include: Sperling, G. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347 (HIPS is the Human Information Processing Laboratory's Image Processing System); van Santen, Jan P. H., and Sperling, G. (1985). Elaborated Reichardt detectors. Journal of the Optical Society of America A.
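    The "elaborated Reichardt detector" cited in this record belongs to the family of correlation-type motion detectors: two nearby inputs are cross-correlated through a delay, and opponent subtraction yields a signed direction signal. A minimal (non-elaborated) version can be sketched as follows; the stimulus, delay, and frequency below are illustrative choices, not parameters from the cited work.

```python
# Minimal correlation-type (Reichardt) motion detector.
import numpy as np

def reichardt(signal_a, signal_b, delay=1):
    """Opponent correlation of two inputs; positive output = a-to-b motion."""
    a_delayed = np.roll(signal_a, delay)
    b_delayed = np.roll(signal_b, delay)
    out = a_delayed * signal_b - b_delayed * signal_a
    return out[delay:].mean()   # drop samples corrupted by the wrap-around

# A drifting sinusoid sampled at two nearby points: B lags A in time,
# so the pattern moves from A toward B.
t = np.arange(200)
a = np.sin(2 * np.pi * 0.05 * t)
b = np.sin(2 * np.pi * 0.05 * (t - 2))
direction = reichardt(a, b)     # positive: motion from a toward b
```

    Swapping the two inputs flips the sign of the output, which is the defining opponent property of this detector family.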

  14. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  15. Contributions of visual and embodied expertise to body perception.

    PubMed

    Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D

    2012-01-01

    Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.

  16. Feature precedence in processing multifeature visual information in the human brain: an event-related potential study.

    PubMed

    Liu, B; Meng, X; Wu, G; Huang, Y

    2012-05-17

    In this article, we studied whether feature precedence exists in the cognitive processing of multifeature visual information in the human brain. Our experiment focused on two important visual features: color and shape. To avoid semantic constraints between them and their resulting impact, a pure color and a simple geometric shape were chosen as the color feature and shape feature of the visual stimulus, respectively. We adopted an "old/new" paradigm to study the cognitive processing of the color feature, the shape feature, and their combination. The experiment consisted of three tasks: a Color task, a Shape task, and a Color-Shape task. The results showed that a feature-based pattern is activated in the human brain when processing multifeature visual information without semantic association between features. Furthermore, the shape feature was processed earlier than the color feature, and the cognitive processing of the color feature was more difficult than that of the shape feature. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences.

    PubMed

    Lee Masson, Haemy; Bulthé, Jessica; Op de Beeck, Hans P; Wallraven, Christian

    2016-08-01

    Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
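    The multivoxel correlation analysis used in studies like this one (often called representational similarity analysis) can be sketched on synthetic data: build a perceptual dissimilarity matrix from the object parameters, a neural dissimilarity matrix from voxel patterns, and rank-correlate the two over unique object pairs. The object set, voxel count, and noise level below are arbitrary inventions.

```python
# Sketch of a multivoxel correlation (representational similarity)
# analysis on synthetic data. "Objects" vary along one hypothetical
# shape parameter; voxel patterns noisily encode it.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_objects, n_voxels = 8, 120

theta = np.linspace(0.0, np.pi, n_objects)          # 1-D shape parameter
shape = np.column_stack([np.cos(theta), np.sin(theta)])

# Perceptual dissimilarity: distances in the physical parameter space
# (the study found that judged shape spaces matched this space).
perceptual = np.linalg.norm(shape[:, None] - shape[None, :], axis=-1)

# Synthetic voxel patterns: a random linear code for shape, plus noise.
patterns = shape @ rng.normal(size=(2, n_voxels)) \
    + 0.5 * rng.normal(size=(n_objects, n_voxels))

# Neural dissimilarity: 1 - correlation between each pair of patterns.
neural = 1.0 - np.corrcoef(patterns)

# Correlate the two dissimilarity structures over the unique pairs.
tri = np.tril_indices(n_objects, k=-1)
rho, _ = spearmanr(perceptual[tri], neural[tri])
```

    A high positive rho means the region's activity patterns mirror the perceptual shape space, which is the logic behind "decoding" shape from a brain area.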

  18. Common Visual Preference for Curved Contours in Humans and Great Apes.

    PubMed

    Munar, Enric; Gómez-Puerto, Gerardo; Call, Josep; Nadal, Marcos

    2015-01-01

    Among the visual preferences that guide many everyday activities and decisions, from consumer choices to social judgment, preference for curved over sharp-angled contours is commonly thought to have played an adaptive role throughout human evolution, favoring the avoidance of potentially harmful objects. However, because nonhuman primates also exhibit preferences for certain visual qualities, it is conceivable that humans' preference for curved contours is grounded on perceptual and cognitive mechanisms shared with extant nonhuman primate species. Here we aimed to determine whether nonhuman great apes and humans share a visual preference for curved over sharp-angled contours using a 2-alternative forced choice experimental paradigm under comparable conditions. Our results revealed that the human group and the great ape group indeed share a common preference for curved over sharp-angled contours, but that they differ in the manner and magnitude with which this preference is expressed behaviorally. These results suggest that humans' visual preference for curved objects evolved from earlier primate species' visual preferences, and that during this process it became stronger, but also more susceptible to the influence of higher cognitive processes and preference for other visual features.

  19. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered dynamic: humans process the systematic changes in patterned light that occur when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what the visual environment contains. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  20. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  21. Preprocessing of emotional visual information in the human piriform cortex.

    PubMed

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study addresses the open question of whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgusted). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  22. Human Computation in Visualization: Using Purpose Driven Games for Robust Evaluation of Visualization Algorithms.

    PubMed

    Ahmed, N; Zheng, Ziyi; Mueller, K

    2012-12-01

    Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties to human cognition and perception. This makes the human brain and sensory system the only truly appropriate platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable for known computer algorithms but is trivial for common-sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and outline a framework by which humans can be enticed to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example of how the evaluation of visualization algorithms can be mapped into a fun and engaging activity, allowing this task to be accomplished in an extensive yet cost-effective way. Finally, we sketch out a framework that moves beyond the pure evaluation of existing visualization methods toward the design of new ones.

  23. Processing Of Visual Information In Primate Brains

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H.; Van Essen, David C.

    1991-01-01

    Report reviews and analyzes information-processing strategies and pathways in primate retina and visual cortex. Of interest both in biological fields and in such related computational fields as artificial neural networks. Focuses on data from macaque, which has superb visual system similar to that of humans. Authors stress concept of "good engineering" in understanding visual system.

  24. Habituation, Response to Novelty, and Dishabituation in Human Infants: Tests of a Dual-Process Theory of Visual Attention.

    ERIC Educational Resources Information Center

    Kaplan, Peter S.; Werner, John S.

    1986-01-01

    Tests infants' dual-process performance (a process mediating response decrements called habituation and a state-dependent process mediating response increments called sensitization) on visual habituation-dishabituation tasks. (HOD)

  25. Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.

    PubMed

    Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf

    2017-09-01

    Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing: however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled-body (control) that appeared at the frequency of the on-line recorded participants' heartbeat or not (not-synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time-window we detected a second effect characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information for the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  26. Timing of target discrimination in human frontal eye fields.

    PubMed

    O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent

    2004-01-01

    Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.

  7. The integration processing of the visual and auditory information in videos of real-world events: an ERP study.

    PubMed

    Liu, Baolin; Wang, Zhongning; Jin, Zhixing

    2009-09-11

    In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory input, but studies on the integrated processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched or mismatched actions (images) and sounds served as stimuli, in order to study the integrated processing of synchronized visual and auditory information from videos of real-world events in the human brain, using event-related potential (ERP) methods. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory streams can interfere with each other, influencing the outcome of this cognitive integration.

  8. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates input from visual and other sensors (e.g., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider-angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
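
    The image-to-audio mapping can be illustrated with a minimal sketch of the general sonification idea: scan the image left to right, map row position to pitch and pixel brightness to loudness. This is a simplified stand-in, not the actual VISOR algorithm, and all parameters (sample rate, sweep timing, frequency range) are hypothetical:

    ```python
    import math

    def image_to_audio(image, sample_rate=8000, col_duration=0.05,
                       f_min=200.0, f_max=2000.0):
        """Scan a grayscale image (list of rows, values 0..1) left to right.
        Each column becomes one time slice; each row drives a sine whose
        frequency rises from bottom to top and whose amplitude is the
        pixel brightness."""
        n_rows = len(image)
        n_cols = len(image[0])
        samples_per_col = int(sample_rate * col_duration)
        # Top row gets the highest frequency, bottom row the lowest.
        freqs = [f_min + (f_max - f_min) * (n_rows - 1 - r) / (n_rows - 1)
                 for r in range(n_rows)]
        audio = []
        for c in range(n_cols):
            for s in range(samples_per_col):
                t = s / sample_rate
                v = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                        for r in range(n_rows))
                audio.append(v / n_rows)  # keep samples in [-1, 1]
        return audio

    # A 4x4 image with a bright diagonal
    img = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
    wave = image_to_audio(img)
    ```

    Each column occupies one time slice of the output, so the bright diagonal is heard as a pitch sweep; a trained listener can learn to decode such patterns back into spatial layout.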

  9. Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm.

    PubMed

    Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin

    2017-07-05

    Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  10. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-01-01

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108

  11. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence.

    PubMed

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-06-10

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.
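
    Comparisons like the one above are typically made with representational similarity analysis (RSA): each system's responses are summarized as a representational dissimilarity matrix (RDM) over stimulus conditions, and RDMs are then correlated across systems. A minimal sketch with random data standing in for the MEG and DNN responses (illustrative only, not the paper's pipeline):

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def rdm(patterns):
        """Representational dissimilarity matrix: 1 - Pearson correlation
        between the response patterns of every pair of conditions.
        patterns: (n_conditions, n_features) array."""
        return 1.0 - np.corrcoef(patterns)

    def rdm_similarity(rdm_a, rdm_b):
        """Spearman correlation of the upper triangles of two RDMs,
        the standard summary statistic in RSA."""
        iu = np.triu_indices_from(rdm_a, k=1)
        rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
        return rho

    rng = np.random.default_rng(0)
    brain = rng.standard_normal((10, 50))                 # e.g. 10 conditions x 50 sensors
    layer = brain + 0.1 * rng.standard_normal((10, 50))   # a model layer that mirrors the brain
    print(rdm_similarity(rdm(brain), rdm(layer)))         # high: the geometries match
    ```

    Sliding this comparison over MEG time points (or over fMRI regions) yields the kind of spatio-temporal hierarchy-matching result the abstract describes.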

  12. The role of lateral occipitotemporal junction and area MT/V5 in the visual analysis of upper-limb postures.

    PubMed

    Peigneux, P; Salmon, E; van der Linden, M; Garraux, G; Aerts, J; Delfiore, G; Degueldre, C; Luxen, A; Orban, G; Franck, G

    2000-06-01

    Humans, like numerous other species, strongly rely on the observation of gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or tridimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i.e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive tridimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation. Copyright 2000 Academic Press.

  13. Hemispheric Asymmetry of Visual Scene Processing in the Human Brain: Evidence from Repetition Priming and Intrinsic Activity

    PubMed Central

    Kahn, Itamar; Wig, Gagan S.; Schacter, Daniel L.

    2012-01-01

    Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes. PMID:21968568

  14. Hemispheric asymmetry of visual scene processing in the human brain: evidence from repetition priming and intrinsic activity.

    PubMed

    Stevens, W Dale; Kahn, Itamar; Wig, Gagan S; Schacter, Daniel L

    2012-08-01

    Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes.

  15. Processing of form stimuli presented unilaterally in humans, chimpanzees (Pan troglodytes), and monkeys (Macaca mulatta)

    NASA Technical Reports Server (NTRS)

    Hopkins, William D.; Washburn, David A.; Rumbaugh, Duane M.

    1990-01-01

    Visual forms were unilaterally presented using a video-task paradigm to ten humans, chimpanzees, and two rhesus monkeys to determine whether hemispheric advantages existed in the processing of these stimuli. Both accuracy and reaction time served as dependent measures. For the chimpanzees, a significant right hemisphere advantage was found within the first three test sessions. The humans and monkeys failed to show a hemispheric advantage as determined by accuracy scores. Analysis of reaction time data revealed a significant left hemisphere advantage for the monkeys. A visual half-field x block interaction was found for the chimpanzees, with a significant left visual field advantage in block two, whereas a right visual field advantage was found in block four. In the human subjects, a left visual field advantage was found in block three when they used their right hands to respond. The results are discussed in relation to recent reports of hemispheric advantages for nonhuman primates.

  16. Cognitive issues in searching images with visual queries

    NASA Astrophysics Data System (ADS)

    Yu, ByungGu; Evens, Martha W.

    1999-01-01

    In this paper, we propose our image indexing technique and visual query processing technique. Our mental images are different from the actual retinal images and many things, such as personal interests, personal experiences, perceptual context, the characteristics of spatial objects, and so on, affect our spatial perception. These private differences are propagated into our mental images and so our visual queries become different from the real images that we want to find. This is a hard problem and few people have tried to work on it. In this paper, we survey the human mental imagery system, the human spatial perception, and discuss several kinds of visual queries. Also, we propose our own approach to visual query interpretation and processing.

  17. Spatial updating in human parietal cortex

    NASA Technical Reports Server (NTRS)

    Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.

    2003-01-01

    Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

  18. Visual masking and the dynamics of human perception, cognition, and consciousness A century of progress, a contemporary synthesis, and future directions.

    PubMed

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H; Oğmen, Haluk

    2008-07-15

    The 1990s, the "decade of the brain," witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this "steady-state approach," more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness.

  19. Visual masking and the dynamics of human perception, cognition, and consciousness A century of progress, a contemporary synthesis, and future directions

    PubMed Central

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H.; Öğmen, Haluk

    2008-01-01

    The 1990s, the “decade of the brain,” witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this “steady-state approach,” more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness. PMID:20517493

  20. Distinct populations of neurons respond to emotional valence and arousal in the human subthalamic nucleus.

    PubMed

    Sieger, Tomáš; Serranová, Tereza; Růžička, Filip; Vostatek, Pavel; Wild, Jiří; Štastná, Daniela; Bonnet, Cecilia; Novák, Daniel; Růžička, Evžen; Urgošík, Dušan; Jech, Robert

    2015-03-10

    Both animal studies and studies using deep brain stimulation in humans have demonstrated the involvement of the subthalamic nucleus (STN) in motivational and emotional processes; however, participation of this nucleus in processing human emotion has not been investigated directly at the single-neuron level. We analyzed the relationship between the neuronal firing from intraoperative microrecordings from the STN during affective picture presentation in patients with Parkinson's disease (PD) and the affective ratings of emotional valence and arousal performed subsequently. We observed that 17% of neurons responded to emotional valence and arousal of visual stimuli according to individual ratings. The activity of some neurons was related to emotional valence, whereas different neurons responded to arousal. In addition, 14% of neurons responded to visual stimuli. Our results suggest the existence of neurons involved in processing or transmission of visual and emotional information in the human STN, and provide evidence of separate processing of the affective dimensions of valence and arousal at the level of single neurons as well.

  1. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of the visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs from the future computer environment, features needed to attain this environment, prospects for changes in and the impact of the visualization revolution on the human-computer interface, human processing capabilities, limits of personal environment and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of the alternate approaches for and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  2. "Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.

    PubMed

    Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina

    2015-09-16

    Human cortex comprises specialized networks that support functions such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language: syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language: syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization. Copyright © 2015 the authors.

  3. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
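
    The edge-enhancing band-pass response described above can be sketched as a 1-D difference-of-Gaussians filter, a standard analogue of center-surround processing in early vision (the sigmas and radius below are illustrative, not the paper's optimized design):

    ```python
    import math

    def gaussian_kernel(sigma, radius):
        """Normalized 1-D Gaussian kernel of width 2*radius + 1."""
        k = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
        s = sum(k)
        return [v / s for v in k]

    def convolve(signal, kernel):
        """Direct convolution with edge clamping."""
        r = len(kernel) // 2
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for j, w in enumerate(kernel):
                idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at edges
                acc += w * signal[idx]
            out.append(acc)
        return out

    def dog_filter(signal, sigma_center=1.0, sigma_surround=3.0, radius=9):
        """Difference-of-Gaussians band-pass: narrow 'center' minus broad
        'surround', which enhances edges and suppresses flat regions."""
        c = convolve(signal, gaussian_kernel(sigma_center, radius))
        s = convolve(signal, gaussian_kernel(sigma_surround, radius))
        return [a - b for a, b in zip(c, s)]

    # A step edge: flat regions give ~0, the boundary produces a biphasic response
    step = [0.0] * 20 + [1.0] * 20
    response = dog_filter(step)
    ```

    Because both kernels are normalized, uniform regions cancel exactly; only luminance changes survive, which is the dynamic-range compression and edge enhancement the abstract refers to.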

  4. Altered Evoked Gamma-Band Responses Reveal Impaired Early Visual Processing in ADHD Children

    ERIC Educational Resources Information Center

    Lenz, Daniel; Krauel, Kerstin; Flechtner, Hans-Henning; Schadow, Jeanette; Hinrichs, Hermann; Herrmann, Christoph S.

    2010-01-01

    Neurophysiological studies yield contrary results whether attentional problems of patients with attention-deficit/hyperactivity disorder (ADHD) are related to early visual processing deficits or not. Evoked gamma-band responses (GBRs), being among the first cortical responses occurring as early as 90 ms after visual stimulation in human EEG, have…

  5. Masking disrupts reentrant processing in human visual cortex.

    PubMed

    Fahrenfort, J J; Scholte, H S; Lamme, V A F

    2007-09-01

    In masking, a stimulus is rendered invisible through the presentation of a second stimulus shortly after the first. Over the years, authors have typically explained masking by postulating some early disruption process. In these feedforward-type explanations, the mask somehow "catches up" with the target stimulus, disrupting its processing either through lateral or interchannel inhibition. However, studies from recent years indicate that visual perception--and most notably visual awareness itself--may depend strongly on cortico-cortical feedback connections from higher to lower visual areas. This has led some researchers to propose that masking derives its effectiveness from selectively interrupting these reentrant processes. In this experiment, we used electroencephalogram measurements to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked (seen) and masked (unseen) conditions. Electroencephalogram derivatives that are typically associated with reentrant processing turn out to be absent in the masked condition. Moreover, extrastriate visual areas are still activated early on by both seen and unseen stimuli, as shown by scalp surface Laplacian current source-density maps. This conclusively shows that feedforward processing is preserved, even when subject performance is at chance as determined by objective measures. From these results, we conclude that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.

  6. Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception.

    PubMed

    Dima, Diana C; Perry, Gavin; Singh, Krish D

    2018-06-11

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low- and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
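
    The spatial-frequency manipulation can be sketched as a Fourier-domain band-pass over an image. This is a generic hard ring mask with hypothetical cutoffs, not the study's actual filters:

    ```python
    import numpy as np

    def bandpass_image(img, low_cyc, high_cyc):
        """Keep only spatial frequencies between low_cyc and high_cyc
        (in cycles per image) via a hard ring mask in the Fourier domain."""
        f = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        # Distance of each Fourier bin from DC, in cycles per image
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.hypot(yy, xx)
        mask = (radius >= low_cyc) & (radius <= high_cyc)
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    rng = np.random.default_rng(1)
    scene = rng.standard_normal((64, 64))    # stand-in for a natural scene image
    low = bandpass_image(scene, 0, 8)        # coarse layout (low spatial frequencies)
    high = bandpass_image(scene, 8, 32)      # fine detail (high spatial frequencies)
    ```

    In practice smooth (e.g., Butterworth or Gaussian) masks are preferred over a hard ring to avoid ringing artifacts; the hard mask keeps the sketch short.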

  7. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-06-13

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.

  8. Visual Attention and Applications in Multimedia Technologies

    PubMed Central

    Le Callet, Patrick; Niebur, Ernst

    2013-01-01

    Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403

  9. Orthographic processing in pigeons (Columba livia)

    PubMed Central

    Scarf, Damian; Boy, Karoline; Uber Reinert, Anelisie; Devine, Jack; Güntürkün, Onur; Colombo, Michael

    2016-01-01

    Learning to read involves the acquisition of letter–sound relationships (i.e., decoding skills) and the ability to visually recognize words (i.e., orthographic knowledge). Although decoding skills are clearly human-unique, given they are seated in language, recent research and theory suggest that orthographic processing may derive from the exaptation or recycling of visual circuits that evolved to recognize everyday objects and shapes in our natural environment. An open question is whether orthographic processing is limited to visual circuits that are similar to our own or a product of plasticity common to many vertebrate visual systems. Here we show that pigeons, organisms that separated from humans more than 300 million y ago, process words orthographically. Specifically, we demonstrate that pigeons trained to discriminate words from nonwords picked up on the orthographic properties that define words and used this knowledge to identify words they had never seen before. In addition, the pigeons were sensitive to the bigram frequencies of words (i.e., the common co-occurrence of certain letter pairs), the edit distance between nonwords and words, and the internal structure of words. Our findings demonstrate that visual systems organizationally distinct from the primate visual system can also be exapted or recycled to process the visual word form. PMID:27638211
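
    The orthographic statistics mentioned above (bigram frequency and the edit distance between nonwords and words) are straightforward to compute. A minimal sketch with a toy word list, not the study's stimuli:

    ```python
    def edit_distance(a, b):
        """Levenshtein distance: minimum insertions, deletions and
        substitutions needed to turn string a into string b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def bigram_counts(words):
        """Frequency of adjacent letter pairs across a word list."""
        counts = {}
        for w in words:
            for a, b in zip(w, w[1:]):
                counts[a + b] = counts.get(a + b, 0) + 1
        return counts

    print(edit_distance("very", "vrey"))           # transposed nonword: → 2
    print(bigram_counts(["very", "every"])["er"])  # 'er' occurs in both words: → 2
    ```

    Sensitivity to these measures (e.g., rejecting nonwords more slowly when their edit distance to a real word is small) is the behavioral signature of orthographic processing the study reports.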

  10. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gilden, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
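    Local frequency coding of the kind this model builds on is commonly formalized with Gabor filters: sinusoids at a given spatial frequency and orientation, windowed by a Gaussian. A small illustrative sketch (parameters are arbitrary, chosen only to show orientation selectivity):

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """2D Gabor: a carrier at spatial frequency `freq` (cycles/pixel) and
    orientation `theta` (radians), windowed by an isotropic Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# A vertical grating drives the matched (0-degree) Gabor far more strongly
# than the orthogonal (90-degree) one.
cols = np.mgrid[0:32, 0:32][1]
grating = np.cos(2 * np.pi * 0.125 * cols)       # vertical grating, 0.125 c/px
patch = grating[8:23, 8:23]
resp_matched = np.abs((gabor_kernel(0.125, 0.0) * patch).sum())
resp_ortho = np.abs((gabor_kernel(0.125, np.pi / 2) * patch).sum())
print(resp_matched > resp_ortho)
```

    A bank of such filters over several frequencies and orientations is one standard way to build the "local frequency representations" the project describes; the actual model's filter parameters are not specified here.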

  11. Pigeon visual short-term memory directly compared to primates.

    PubMed

    Wright, Anthony A; Elmore, L Caitlin

    2016-02-01

    Three pigeons were trained to remember arrays of 2-6 colored squares and detect which of two squares had changed color to test their visual short-term memory. Procedures (e.g., stimuli, displays, viewing times, delays) were similar to those used to test monkeys and humans. Following extensive training, pigeons performed slightly better than similarly trained monkeys, but both animal species were considerably less accurate than humans with the same array sizes (2, 4 and 6 items). Pigeons and monkeys showed calculated memory capacities of one item or less, whereas humans showed a memory capacity of 2.5 items. Despite the differences in calculated memory capacities, the pigeons' memory results, like those from monkeys and humans, were all well characterized by an inverse power-law function fit to d' values for the five display sizes. This characterization provides a simple, straightforward summary of the fundamental processing of visual short-term memory (how visual short-term memory declines with memory load) that emphasizes species similarities based upon similar functional relationships. By closely matching pigeon testing parameters to those of monkeys and humans, these similar functional relationships suggest similar underlying processes of visual short-term memory in pigeons, monkeys and humans. Copyright © 2015 Elsevier B.V. All rights reserved.
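    The inverse power-law characterization mentioned above is straightforward to reproduce in outline: fit d' = a·n^(-b) to d' values across display sizes, which becomes linear in log-log space. The d' values below are invented for illustration only, not the paper's data.

```python
import numpy as np

# Fit d' = a * n**(-b) by linear regression in log-log space:
# log d' = log a - b * log n
display_sizes = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
d_prime = np.array([3.2, 2.3, 1.9, 1.6, 1.4])   # hypothetical d' values

slope, intercept = np.polyfit(np.log(display_sizes), np.log(d_prime), 1)
a, b = np.exp(intercept), -slope
print(f"d' ~ {a:.2f} * n^(-{b:.2f})")
```

    The exponent b summarizes how quickly memory performance declines with load; the species comparison in the paper rests on such fitted functions having similar form across pigeons, monkeys, and humans.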

  12. Teaching Effectively with Visual Effect in an Image-Processing Class.

    ERIC Educational Resources Information Center

    Ng, G. S.

    1997-01-01

    Describes a course teaching the use of computers in emulating human visual capability and image processing, and proposes an interactive presentation using multimedia technology to capture and sustain student attention. Describes the three-phase presentation: introduction of image-processing equipment, presentation of lecture material, and…

  13. Processing of Visual Imagery by an Adaptive Model of the Visual System: Its Performance and its Significance. Final Report, June 1969-March 1970.

    ERIC Educational Resources Information Center

    Tallman, Oliver H.

    A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…

  14. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    ERIC Educational Resources Information Center

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  15. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by providing an auditory representation alongside the visual one. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns.
The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
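    The column-by-column image-to-sound idea can be sketched in a few lines. The mapping below (left-to-right image columns become time, row position becomes tone frequency, brightness becomes amplitude) is a generic illustration of this class of sensory-substitution mapping, not the VISOR implementation:

```python
import numpy as np

def image_to_sound(image, duration=1.0, rate=8000,
                   f_low=200.0, f_high=4000.0):
    """Scan image columns left to right; each row drives a sine tone
    (top row = highest pitch), with pixel brightness as amplitude."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * rate / n_cols)
    t = np.arange(samples_per_col) / rate
    freqs = np.linspace(f_high, f_low, n_rows)   # top row -> highest frequency
    chunks = []
    for col in range(n_cols):
        tones = image[:, col, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        chunks.append(tones.sum(axis=0) / n_rows)
    return np.concatenate(chunks)

img = np.zeros((16, 8))
img[2, :] = 1.0                      # one bright horizontal line near the top
audio = image_to_sound(img)          # -> a sustained high-pitched tone
print(audio.shape)
```

    A horizontal line therefore sounds like a steady tone and a diagonal line like a rising or falling sweep, which is the kind of auditory pattern the abstract argues the hearing system can learn to interpret.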

  16. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  17. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  18. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on one or several specific points in space. However, such cooperation is hard to obtain, especially from children and human infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first time vision-based gaze tracking has been applied to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
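    The classify-gaze-behavior-from-labeled-features step can be illustrated compactly. The sketch below substitutes a simpler class-conditional Gaussian classifier (one Gaussian per labeled behavior, rather than the paper's full mixture model) over synthetic stand-ins for PCCR feature vectors; feature values and class names are invented:

```python
import numpy as np

def fit_gaussian(samples):
    """Fit mean and covariance for one labeled gaze-behavior class."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    """Log density of a multivariate Gaussian at feature vector x."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
# Synthetic 2D "PCCR-vector" features for two hypothetical gaze behaviors.
fixating = rng.normal([0.0, 0.0], 0.1, size=(200, 2))
averting = rng.normal([1.0, 0.5], 0.1, size=(200, 2))
models = {"fixating": fit_gaussian(fixating),
          "averting": fit_gaussian(averting)}

x = np.array([0.05, -0.02])          # a new feature vector to classify
best = max(models, key=lambda k: log_likelihood(x, *models[k]))
print(best)
```

    A GMM generalizes this by allowing several Gaussian components per class, which matters when a behavior produces multimodal feature distributions.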

  19. Visual Field Map Clusters in High-Order Visual Processing: Organization of V3A/V3B and a New Cloverleaf Cluster in the Posterior Superior Temporal Sulcus

    PubMed Central

    Barton, Brian; Brewer, Alyssa A.

    2017-01-01

    The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs) that each follows the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm² with relatively large pRF sizes of ∼2–8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm² in surface area, with pRF sizes here similarly ∼1–8° of visual angle across the same region.
In addition, cortical magnification measurements show that a larger proportion of the pSTS VFMs' surface area is devoted to the peripheral visual field than in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing. PMID:28293182
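    pRF modeling of the kind used here treats each voxel's receptive field as a 2D Gaussian over visual space and predicts the response from its overlap with the stimulus aperture. A bare-bones sketch, with an invented stimulus and parameters:

```python
import numpy as np

def prf_response(stim, x0, y0, sigma, extent=10.0):
    """Predicted response of a 2D-Gaussian pRF (center x0, y0 and size sigma,
    in degrees of visual angle) to a binary stimulus aperture `stim` covering
    [-extent, extent] degrees in both dimensions."""
    n = stim.shape[0]
    coords = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(coords, coords)
    prf = np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * sigma**2))
    return (prf * stim).sum() / prf.sum()

stim = np.zeros((101, 101))
stim[:, 60:75] = 1.0                 # a vertical bar in the right visual field
right_prf = prf_response(stim, x0=4.0, y0=0.0, sigma=2.0)
left_prf = prf_response(stim, x0=-4.0, y0=0.0, sigma=2.0)
print(right_prf > left_prf)          # right-field pRF responds more to the bar
```

    Fitting x0, y0, and sigma per voxel against measured fMRI time series (rather than computing a single overlap, as here) is what yields the VFM eccentricity and pRF-size maps the paper reports.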

  20. Where is your shoulder? Neural correlates of localizing others' body parts.

    PubMed

    Felician, Olivier; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Roll, Jean-Pierre; Romaiguère, Patricia

    2009-07-01

    Neuropsychological studies based on pointing-to-body-parts paradigms suggest that the left posterior parietal lobe is involved in the visual processing of other persons' bodies. In addition, some patients have been found to show a mild deficit when dealing with abstract human representations but a marked impairment with realistically represented bodies, suggesting that this processing may be modulated by the abstraction level of the body to be analyzed. These issues were examined in the present fMRI experiment, designed to evaluate the effects of visually processing human bodies of different abstraction levels on brain activity. The human specificity of the studied processes was assessed using whole-body representations of humans and of dogs, while the effects of the abstraction level of the representation were assessed using drawings, photographs, and videos. To assess the effect of species and stimulus complexity on the BOLD signal, we performed a two-way ANOVA with factors species (human versus animal) and stimulus complexity (drawings, photographs, and videos). When pointing to body parts, irrespective of stimulus complexity, we observed greater activity for human than for animal bodies in the left angular gyrus (BA 39), as suggested by lesion studies. This effect was also present in midline cortical structures including mesial prefrontal, anterior cingulate and precuneal regions. When pointing to body parts, irrespective of the species to be processed, we observed greater activity for videos than for photographs and drawings in the right superior parietal lobule (BA 7), and bilaterally in the superior temporal sulcus, the supramarginal gyrus (BA 40) and the lateral extrastriate visual cortex (including the "extrastriate body area"). Taken together, these data suggest that, in comparison with other mammals, the visual processing of other humans' bodies is associated with left angular gyrus activity, but also with midline structures commonly implicated in self-reference.
They also suggest a role of the lateral extrastriate cortex in the processing of dynamic and biologically relevant body representations.

  1. Modulation of induced gamma band activity in the human EEG by attention and visual information processing.

    PubMed

    Müller, M M; Gruber, T; Keil, A

    2000-12-01

    Here we present a series of four studies aimed at investigating the link between induced gamma band activity in the human EEG and visual information processing. We demonstrated and validated the modulation of spectral gamma band power by spatial selective visual attention. When subjects attended to a certain stimulus, spectral power was increased compared to when the same stimulus was ignored. In addition, we showed a shift of the spectral gamma band power increase to the contralateral hemisphere when subjects shifted their attention to one visual hemifield. The following study investigated induced gamma band activity and the perception of a Gestalt. Ambiguous rotating figures were used to operationalize the law of good figure (gute Gestalt). We found increased gamma band power at posterior electrode sites when subjects perceived an object. In the last experiment we demonstrated differential hemispheric gamma band activation when subjects were confronted with emotional pictures. Together with other studies presented in this volume, the results of the present experiments support the notion that induced gamma band activity in the human EEG is closely related to visual information processing and attentional perceptual mechanisms.

  2. Distinct populations of neurons respond to emotional valence and arousal in the human subthalamic nucleus

    PubMed Central

    Sieger, Tomáš; Serranová, Tereza; Růžička, Filip; Vostatek, Pavel; Wild, Jiří; Šťastná, Daniela; Bonnet, Cecilia; Novák, Daniel; Růžička, Evžen; Urgošík, Dušan; Jech, Robert

    2015-01-01

    Both animal studies and studies using deep brain stimulation in humans have demonstrated the involvement of the subthalamic nucleus (STN) in motivational and emotional processes; however, participation of this nucleus in processing human emotion has not been investigated directly at the single-neuron level. We analyzed the relationship between the neuronal firing from intraoperative microrecordings from the STN during affective picture presentation in patients with Parkinson’s disease (PD) and the affective ratings of emotional valence and arousal performed subsequently. We observed that 17% of neurons responded to emotional valence and arousal of visual stimuli according to individual ratings. The activity of some neurons was related to emotional valence, whereas different neurons responded to arousal. In addition, 14% of neurons responded to visual stimuli. Our results suggest the existence of neurons involved in processing or transmission of visual and emotional information in the human STN, and provide evidence of separate processing of the affective dimensions of valence and arousal at the level of single neurons as well. PMID:25713375

  3. Visual Processing of Object Velocity and Acceleration

    DTIC Science & Technology

    1994-02-04

    A failure of motion deblurring in the human visual system. Investigative Ophthalmology and Visual Science (Suppl.), 34, 1230. Watamaniuk, S.N.J. and McKee, S.P. Why is a trajectory more detectable in noise than correlated signal dots? Investigative Ophthalmology and Visual Science (Suppl.), 34, 1364.

  4. [Sensory loss and brain reorganization].

    PubMed

    Fortin, Madeleine; Voss, Patrice; Lassonde, Maryse; Lepore, Franco

    2007-11-01

    It is without a doubt that humans are first and foremost visual beings. Even though the other sensory modalities provide us with valuable information, it is vision that generally offers the most reliable and detailed information concerning our immediate surroundings. It is therefore not surprising that nearly a third of the human brain processes, in one way or another, visual information. But what happens when visual information no longer reaches the brain regions responsible for processing it? Indeed, numerous medical conditions such as congenital glaucoma, retinitis pigmentosa and retinal detachment, to name a few, can disrupt the visual system and lead to blindness. So, do the brain areas responsible for processing visual stimuli simply shut down and become non-functional? Do they become dead weight and simply stop contributing to cognitive and sensory processes? Current data suggest that this is not the case. Quite the contrary: it would seem that congenitally blind individuals benefit from the recruitment of these areas by other sensory modalities to carry out non-visual tasks. In fact, our laboratory has been studying blindness and its consequences on both brain and behaviour for many years now. We have shown that blind individuals demonstrate exceptional hearing abilities. This finding holds true for stimuli originating from both near and far space. It also holds true, under certain circumstances, for those who lost their sight later in life, beyond a period generally believed to limit the brain changes following the loss of sight. In the case of the early blind, we have shown that their ability to localize sounds is strongly correlated with activity in the occipital cortex (the site of visual processing), demonstrating that these areas are functionally engaged by the task. It would therefore seem that the plastic nature of the human brain allows them to make new use of the cerebral areas normally dedicated to visual processing.

  5. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities

    PubMed Central

    Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.

    2016-01-01

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows these processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220

  6. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities.

    PubMed

    Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X

    2016-11-21

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows these processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.

  7. Human infrared vision is triggered by two-photon chromophore isomerization

    PubMed Central

    Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof

    2014-01-01

    Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light. Our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and display quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound 11-cis-retinyl-propylamine Schiff base demonstrate the direct isomerization of visual chromophore by a two-photon chromophore isomerization. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064

  8. Visual analytics as a translational cognitive science.

    PubMed

    Fisher, Brian; Green, Tera Marie; Arias-Hernández, Richard

    2011-07-01

    Visual analytics is a new interdisciplinary field of study that calls for a more structured scientific approach to understanding the effects of interaction with complex graphical displays on human cognitive processes. Its primary goal is to support the design and evaluation of graphical information systems that better support cognitive processes in areas as diverse as scientific research and emergency management. The methodologies that make up this new field are as yet ill defined. This paper proposes a pathway for development of visual analytics as a translational cognitive science that bridges fundamental research in human/computer cognitive systems and design and evaluation of information systems in situ. Achieving this goal will require the development of enhanced field methods for conceptual decomposition of human/computer cognitive systems that maps onto laboratory studies, and improved methods for conducting laboratory investigations that might better map onto real-world cognitive processes in technology-rich environments. Copyright © 2011 Cognitive Science Society, Inc.

  9. Neural Mechanisms of Selective Visual Attention.

    PubMed

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  10. Crossmodal association of auditory and visual material properties in infants.

    PubMed

    Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K

    2018-06-18

    The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrated for the first time a mapping between auditory material properties and visual material ("Metal" and "Wood") in the right temporal region in preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquired the audio-visual mapping for the "Metal" material later than for the "Wood" material, consistent with the finding that infants form the visual property of the "Metal" material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.

  11. Age and Visual Information Processing.

    ERIC Educational Resources Information Center

    Gummerman, Kent; And Others

    This paper reports on three studies concerned with aspects of human visual information processing. Study I was an effort to measure the duration of iconic storage using a partial report method in children ranging in age from 6 to 13 years. Study II was designed to detect age related changes in the rate of processing (perceptually encoding) letters…

  12. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  13. Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway.

    PubMed

    Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios

    2018-06-21

    Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.

  14. Resolving human object recognition in space and time

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2014-01-01

    A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combined human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated with single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044
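
    The core of the representational similarity analysis (RSA) used for this kind of MEG-fMRI fusion can be sketched in a few lines: build a representational dissimilarity matrix (RDM) per modality over the same stimuli, then correlate the two geometries. The sketch below is illustrative only; the random data, array sizes, and function names are placeholders, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix in condensed form:
    correlation distance between the response patterns of every
    pair of stimuli (rows = stimuli, columns = channels/voxels)."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)
meg = rng.normal(size=(92, 306))    # 92 images x 306 MEG sensors (placeholder data)
fmri = rng.normal(size=(92, 500))   # 92 images x 500 V1 voxels (placeholder data)

# Fusion: correlate the two representational geometries.
rho, p = spearmanr(rdm(meg), rdm(fmri))
print(f"MEG-fMRI RDM correlation: {rho:.3f}")
```

    In an actual fusion analysis, the MEG RDM would be computed at each time point and the fMRI RDM per region of interest, yielding a correlation time course for each region.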

  15. Multiscale neural connectivity during human sensory processing in the brain

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Runnova, Anastasia E.; Frolov, Nikita S.; Makarov, Vladimir V.; Nedaivozov, Vladimir; Koronovskii, Alexey A.; Pisarchik, Alexander; Hramov, Alexander E.

    2018-05-01

    Stimulus-related brain activity is analyzed using wavelet-based measures of neural interaction between occipital and parietal brain areas in the alpha (8-12 Hz) and beta (15-30 Hz) frequency bands. We show that sensory processing related to the perception of visual stimuli induces a brain response with distinct patterns of parieto-occipital interaction in these bands. In the alpha band, the parieto-occipital network shows a homogeneous increase of interaction between all interconnected areas, both within the occipital and parietal lobes and between them. In the beta band, the occipital lobe takes the leading role in the dynamics of the occipital-parietal network: the perception of visual stimuli excites the visual center in the occipital area and then, through increased parieto-occipital interaction, this excitation is transferred to the parietal area, where the attentional center is located. When stimuli are highly ambiguous, we find a greater increase of interaction between interconnected areas in the parietal lobe, reflecting increased attention. Based on these mechanisms, we describe the complex response of the parieto-occipital neuronal network during the perception and primary processing of visual stimuli. The results can serve as an essential complement to the existing theory of the neural aspects of visual stimulus processing.

  16. Visual event-related potentials to biological motion stimuli in autism spectrum disorders

    PubMed Central

    Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan

    2014-01-01

    Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits in cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion-specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher-order processing deficits in ASD. PMID:23887808

  17. Conservation implications of anthropogenic impacts on visual communication and camouflage.

    PubMed

    Delhey, Kaspar; Peters, Anne

    2017-02-01

    Anthropogenic environmental impacts can disrupt the sensory environment of animals and affect important processes from mate choice to predator avoidance. Currently, these effects are best understood for auditory and chemosensory modalities, and recent reviews highlight their importance for conservation. We examined how anthropogenic changes to the visual environment (ambient light, transmission, and backgrounds) affect visual communication and camouflage and considered the implications of these effects for conservation. Human changes to the visual environment can increase predation risk by affecting camouflage effectiveness, lead to maladaptive patterns of mate choice, and disrupt mutualistic interactions between pollinators and plants. Implications for conservation are particularly evident for disrupted camouflage due to its tight links with survival. The conservation importance of impaired visual communication is less documented. The effects of anthropogenic changes on visual communication and camouflage may be severe when they affect critical processes such as pollination or species recognition. However, when impaired mate choice does not lead to hybridization, the conservation consequences are less clear. We suggest that the demographic effects of human impacts on visual communication and camouflage will be particularly strong when human-induced modifications to the visual environment are evolutionarily novel (i.e., very different from natural variation); affected species and populations have low levels of intraspecific (genotypic and phenotypic) variation and behavioral, sensory, or physiological plasticity; and the processes affected are directly related to survival (camouflage), species recognition, or number of offspring produced, rather than offspring quality or attractiveness. 
Our findings suggest that anthropogenic effects on the visual environment may be as important for conservation as anthropogenic effects on other sensory modalities. © 2016 Society for Conservation Biology.

  18. When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    PubMed Central

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007

  20. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks in a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, and data from 33 separate Mechanical Turk subjects were used to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. 
In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324

  1. Sex differences in motor and cognitive abilities predicted from human evolutionary history with some implications for models of the visual system.

    PubMed

    Sanders, Geoff

    2013-01-01

    This article expands the knowledge base available to sex researchers by reviewing recent evidence for sex differences in coincidence-anticipation timing (CAT), motor control with the hand and arm, and visual processing of stimuli in near and far space. In CAT, the differences are between-sex and, therefore, typical of other widely reported sex differences. Men perform CAT tasks with greater accuracy and precision than women, who tend to underestimate time to arrival. Null findings arise because significant sex differences are found with easy but not with difficult tasks. The differences in motor control and visual processing are within-sex, and they underlie reciprocal patterns of performance in women and men. Women exert motor control better with the hand than with the arm; men show the reverse pattern. Women perform visual processing better for stimuli within hand reach (near space) than beyond hand reach (far space); men show the reverse pattern. The sex differences seen in each of these three abilities are consistent with the evolutionary selection of men for hunting-related skills and women for gathering-related skills. The implications of the sex differences in visual processing for two visual system models of human vision are discussed.

  2. Presentation Media, Information Complexity, and Learning Outcomes

    ERIC Educational Resources Information Center

    Andres, Hayward P.; Petersen, Candice

    2002-01-01

    Cognitive processing limitations restrict the number of complex information items held and processed in human working memory. To overcome such limitations, a verbal working memory channel is used to construct an if-then proposition representation of facts and a visual working memory channel is used to construct a visual imagery of geometric…

  3. Dissociation and Convergence of the Dorsal and Ventral Visual Streams in the Human Prefrontal Cortex

    PubMed Central

    Takahashi, Emi; Ohki, Kenichi; Kim, Dae-Shik

    2012-01-01

    Visual information is largely processed through two pathways in the primate brain: an object pathway from the primary visual cortex to the temporal cortex (ventral stream) and a spatial pathway to the parietal cortex (dorsal stream). Whether and to what extent dissociation exists in the human prefrontal cortex (PFC) has long been debated. We examined anatomical connections from functionally defined areas in the temporal and parietal cortices to the PFC, using noninvasive functional and diffusion-weighted magnetic resonance imaging. The right inferior frontal gyrus (IFG) received converging input from both streams, while the right superior frontal gyrus received input only from the dorsal stream. Interstream functional connectivity to the IFG was dynamically recruited only when both object and spatial information were processed. These results suggest that the human PFC receives dissociated and converging visual pathways, and that the right IFG region serves as an integrator of the two types of information. PMID:23063444

  4. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were made either while watching the visual scene without moving or simultaneously with the reaching task; in the latter case, the perceptual processing stream could also profit from the specialized processing of reafferent information. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  5. Bayesian learning of visual chunks by human observers

    PubMed Central

    Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté

    2008-01-01

    Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
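
    The flavor of the model comparison described above can be illustrated with a toy sketch: two shapes that always co-occur are better explained as a single chunk than as independent features, because the chunk hypothesis accounts for the same scenes with fewer parameters. This is a deliberate simplification with made-up data; the paper's ideal learner performs full Bayesian model comparison (integrating over parameters), whereas the sketch compares maximum-likelihood fits.

```python
import numpy as np

# Toy scenes: columns are shapes A and B; in this data they always co-occur.
scenes = np.array([[1, 1], [1, 1], [0, 0], [1, 1], [0, 0], [1, 1]])

def log_lik_independent(x):
    """Maximum-likelihood fit with one Bernoulli parameter per shape."""
    p = np.clip(x.mean(axis=0), 1e-9, 1 - 1e-9)
    return float(np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)))

def log_lik_chunk(x):
    """A and B treated as one unit: a scene where only one of them
    appears is impossible under the chunk hypothesis."""
    if not (x[:, 0] == x[:, 1]).all():
        return float("-inf")
    q = np.clip(x[:, 0].mean(), 1e-9, 1 - 1e-9)
    return float(np.sum(x[:, 0] * np.log(q) + (1 - x[:, 0]) * np.log(1 - q)))

# The chunk model explains the same scenes with one parameter instead of
# two, so it attains a higher likelihood whenever A and B always co-occur.
print(log_lik_chunk(scenes) > log_lik_independent(scenes))
```

    The point mirrors the paper's conclusion: an observer storing chunks keeps a representation that is both accurate and economical, rather than encoding the full correlational structure of the input.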

  6. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    ERIC Educational Resources Information Center

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…

  7. Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine

    PubMed Central

    Herdtweck, Christian; Wallraven, Christian

    2013-01-01

    We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from only visual input. Estimates are given without time constraints, with emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. Stimuli are presented for only ms and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies to estimate the horizon in the computer and compare human with machine “behavior” for different image manipulations and image scene types. PMID:24349073

  8. Anxiety affects the amplitudes of red and green color-elicited flash visual evoked potentials in humans.

    PubMed

    Hosono, Yuki; Kitaoka, Kazuyoshi; Urushihara, Ryo; Séi, Hiroyoshi; Kinouchi, Yohsuke

    2014-01-01

    It has been reported that negative emotional changes and conditions affect human visual function at the neural level. However, the effects of emotion specifically on color perception, as measured by evoked potentials, are unknown. In the present study, we investigated whether different anxiety levels affect color information processing for each of 3 wavelengths, using flash visual evoked potentials (FVEPs) and the State-Trait Anxiety Inventory. We found significant positive correlations between FVEP amplitudes and state or trait anxiety scores at the long (sensed as red) and middle (sensed as green) wavelengths. In contrast, short-wavelength-evoked FVEPs were not correlated with anxiety level. Our results suggest that negative emotional conditions may affect color processing in humans.

  9. Are visual peripheries forever young?

    PubMed

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimulus features and redirects foveal attention to new objects, it can also take over functions typical of central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  10. Modeling the role of parallel processing in visual search.

    PubMed

    Cave, K R; Wolfe, J M

    1990-04-01

    Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
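
    The two-stage architecture described above lends itself to a minimal simulation: a parallel stage assigns each display item an activation based on how many target features it shares, and a serial stage then inspects items in order of decreasing activation. The sketch below is not the authors' simulation; the feature coding, noise level, and display composition are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def guided_search(display, target, noise=0.3):
    """Parallel stage: each item's activation counts the features it
    shares with the target, plus Gaussian noise.  Serial stage: items
    are inspected in order of decreasing activation; the return value
    is how many items were inspected before the target was found."""
    activation = (display == target).sum(axis=1) + rng.normal(0, noise, len(display))
    for n, idx in enumerate(np.argsort(-activation), start=1):
        if (display[idx] == target).all():
            return n
    return len(display)

# Conjunction search: the target [1, 1] (e.g. red AND vertical) among
# distractors that each share exactly one feature with it.
target = np.array([1, 1])
display = np.array([target] + [np.array([1, 0]) if i % 2 else np.array([0, 1])
                               for i in range(15)])

trials = [guided_search(display, target) for _ in range(200)]
# With guidance the target is inspected almost immediately -- far below
# the ~8 inspections an unguided serial scan of 16 items would need.
print(np.mean(trials))
```

    Because the target shares both features with itself but only one with each distractor, the parallel stage places it near the top of the inspection order, reproducing the efficient conjunction searches that motivated the model.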

  11. The visual analysis of emotional actions.

    PubMed

    Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie

    2006-01-01

    Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.

  12. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained than with previous methods.

  13. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted more and more attention from researchers in different disciplines, and will significantly contribute to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  14. The Location of Sources of Human Computer Processed Cerebral Potentials for the Automated Assessment of Visual Field Impairment

    PubMed Central

    Leisman, Gerald; Ashkenazi, Maureen

    1979-01-01

    Objective psychophysical techniques for investigating visual fields are described. The paper concerns methods for the collection and analysis of evoked potentials using a small laboratory computer and provides efficient methods for obtaining information about the conduction pathways of the visual system.

  15. Modeling Spatial and Temporal Aspects of Visual Backward Masking

    ERIC Educational Resources Information Center

    Hermens, Frouke; Luksys, Gediminas; Gerstner, Wulfram; Herzog, Michael H.; Ernst, Udo

    2008-01-01

    Visual backward masking is a versatile tool for understanding principles and limitations of visual information processing in the human brain. However, the mechanisms underlying masking are still poorly understood. In the current contribution, the authors show that a structurally simple mathematical model can explain many spatial and temporal…

  16. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    PubMed Central

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  17. Parvocellular Pathway Impairment in Autism Spectrum Disorder: Evidence from Visual Evoked Potentials

    ERIC Educational Resources Information Center

    Fujita, Takako; Yamasaki, Takao; Kamio, Yoko; Hirose, Shinichi; Tobimatsu, Shozo

    2011-01-01

    In humans, visual information is processed via parallel channels: the parvocellular (P) pathway analyzes color and form information, whereas the magnocellular (M) stream plays an important role in motion analysis. Individuals with autism spectrum disorder (ASD) often show superior performance in processing fine detail, but impaired performance in…

  18. The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.

    2008-01-01

    Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. 
Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150

  19. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  20. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control of an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities aiding the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.

  1. Visual laterality in belugas (Delphinapterus leucas) and Pacific white-sided dolphins (Lagenorhynchus obliquidens) when viewing familiar and unfamiliar humans.

    PubMed

    Yeater, Deirdre B; Hill, Heather M; Baus, Natalie; Farnell, Heather; Kuczaj, Stan A

    2014-11-01

    Lateralization of cognitive processes and motor functions has been demonstrated in a number of species, including humans, elephants, and cetaceans. For example, bottlenose dolphins (Tursiops truncatus) have exhibited preferential eye use during a variety of cognitive tasks. The present study investigated the possibility of visual lateralization in 12 belugas (Delphinapterus leucas) and six Pacific white-sided dolphins (Lagenorhynchus obliquidens) located at two separate marine mammal facilities. During free swim periods, the belugas and Pacific white-sided dolphins were presented a familiar human, an unfamiliar human, or no human during 10-15 min sessions. Session videos were coded for gaze duration, eye presentation at approach, and eye preference while viewing each stimulus. Although we did not find any clear group level lateralization, we found individual left eye lateralized preferences related to social stimuli for most belugas and some Pacific white-sided dolphins. Differences in gaze durations were also observed. The majority of individual belugas had longer gaze durations for unfamiliar rather than familiar stimuli. These results suggest that lateralization occurs during visual processing of human stimuli in belugas and Pacific white-sided dolphins and that these species can distinguish between familiar and unfamiliar humans.

  2. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters, tuned to different speeds and orientations, is used to model the dynamic properties of the classical receptive field of V1 simple cells for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround-suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on several publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
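
    The spatiotemporal filtering stage described in this abstract can be sketched in a few lines. The kernel below is a generic 3-D (space x space x time) Gabor tuned to an orientation and a drift speed; the sizes, wavelength, and time constants are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def gabor_3d(size=15, frames=9, wavelength=6.0, theta=0.0,
             speed=1.0, sigma=3.0, tau=2.0):
    """Spatiotemporal Gabor kernel tuned to orientation `theta` (radians)
    and drift `speed` (pixels/frame). Hypothetical parameterization."""
    half, thalf = size // 2, frames // 2
    y, x, t = np.mgrid[-half:half + 1, -half:half + 1, -thalf:thalf + 1]
    # Rotate spatial coordinates into the preferred orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Sinusoidal carrier drifting along xr at the preferred speed,
    # under a Gaussian envelope in space and time.
    carrier = np.cos(2 * np.pi * (xr - speed * t) / wavelength)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2) - t**2 / (2 * tau**2))
    kernel = carrier * envelope
    return kernel - kernel.mean()   # zero-mean: no response to uniform input

# A small filter bank covering several orientations and speeds,
# analogous to a population of V1 simple cells.
bank = [gabor_3d(theta=th, speed=s)
        for th in np.linspace(0, np.pi, 4, endpoint=False)
        for s in (0.5, 1.0, 2.0)]
```

    Convolving a video volume with each kernel in `bank` would yield the speed- and orientation-selective responses that the model's later stages (surround suppression, attention, spike coding) operate on.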

  3. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a temporal filtering operation that models human sensitivity to different temporal frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
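
    The core of this pipeline, block DCT, division by per-coefficient visual thresholds, and pooling of the error, can be sketched as below. This is a simplified single-frame illustration: it omits the color transform, temporal filtering, and contrast-masking stages, and the flat threshold map and Minkowski exponent are made-up values, not the patent's calibrated ones.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis (rows are frequencies).
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def block_dct(img, n=8):
    # Crop to a multiple of the block size, then DCT every n x n block.
    h, w = img.shape[0] - img.shape[0] % n, img.shape[1] - img.shape[1] % n
    M = dct_matrix(n)
    blocks = img[:h, :w].reshape(h // n, n, w // n, n).transpose(0, 2, 1, 3)
    return M @ blocks @ M.T

def dvq_error(ref, proc, thresholds):
    # Express each coefficient error in threshold units, then pool with
    # a Minkowski exponent (beta = 4 here, an illustrative choice).
    err = (block_dct(ref) - block_dct(proc)) / thresholds
    return (np.abs(err) ** 4).mean() ** 0.25

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                          # "original" frame
proc = ref + 0.01 * rng.standard_normal((64, 64))   # "processed" frame
thresholds = np.full((8, 8), 0.02)                  # hypothetical threshold map
score = dvq_error(ref, proc, thresholds)            # 0 = visually identical
```

    Errors below one threshold unit are predicted to be invisible, which is what makes the pooled score a perceptual rather than purely numerical quality measure.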

  4. Multiple Transient Signals in Human Visual Cortex Associated with an Elementary Decision

    PubMed Central

    Nolte, Guido

    2017-01-01

    The cerebral cortex continuously undergoes changes in its state, which are manifested in transient modulations of the cortical power spectrum. Cortical state changes also occur at full wakefulness and during rapid cognitive acts, such as perceptual decisions. Previous studies found a global modulation of beta-band (12–30 Hz) activity in human and monkey visual cortex during an elementary visual decision: reporting the appearance or disappearance of salient visual targets surrounded by a distractor. The previous studies disentangled neither the motor action associated with behavioral report nor other secondary processes, such as arousal, from perceptual decision processing per se. Here, we used magnetoencephalography in humans to pinpoint the factors underlying the beta-band modulation. We found that disappearances of a salient target were associated with beta-band suppression, and target reappearances with beta-band enhancement. This was true for both overt behavioral reports (immediate button presses) and silent counting of the perceptual events. This finding indicates that the beta-band modulation was unrelated to the execution of the motor act associated with a behavioral report of the perceptual decision. Further, changes in pupil-linked arousal, fixational eye movements, or gamma-band responses were not necessary for the beta-band modulation. Together, our results suggest that the beta-band modulation was a top-down signal associated with the process of converting graded perceptual signals into a categorical format underlying flexible behavior. This signal may have been fed back from brain regions involved in decision processing to visual cortex, thus enforcing a “decision-consistent” cortical state. SIGNIFICANCE STATEMENT Elementary visual decisions are associated with a rapid state change in visual cortex, indexed by a modulation of neural activity in the beta-frequency range. 
Such decisions are also followed by other events that might affect the state of visual cortex, including the motor command associated with the report of the decision, an increase in pupil-linked arousal, fixational eye movements, and fluctuations in bottom-up sensory processing. Here, we ruled out the necessity of these events for the beta-band modulation of visual cortex. We propose that the modulation reflects a decision-related state change, which is induced by the conversion of graded perceptual signals into a categorical format underlying behavior. The resulting decision signal may be fed back to visual cortex. PMID:28495972

  5. Independence between implicit and explicit processing as revealed by the Simon effect.

    PubMed

    Lo, Shih-Yu; Yeh, Su-Ling

    2011-09-01

    Studies showing human behavior influenced by subliminal stimuli mainly focus on implicit processing per se, and little is known about its interaction with explicit processing. We examined this by using the Simon effect, wherein a task-irrelevant spatial distracter interferes with lateralized responses. Lo and Yeh (2008) found that the visual Simon effect, although it occurred when participants were aware of the visual distracters, did not occur with subliminal visual distracters. We used the same paradigm and examined whether subliminal and supra-threshold stimuli are processed independently by adding a supra-threshold auditory distracter to ascertain whether it would interact with the subliminal visual distracter. Results showed an auditory Simon effect but still no visual Simon effect, indicating that supra-threshold and subliminal stimuli are processed separately in independent streams. In contrast to the traditional view that implicit processing precedes explicit processing, our results suggest that they operate independently in a parallel fashion. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Modeling human comprehension of data visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  7. Encoding model of temporal processing in human visual cortex.

    PubMed

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings prompt a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
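
    The sustained/transient distinction at the heart of this model can be illustrated with a toy simulation: the sustained channel convolves the stimulus time course with a monophasic impulse response, while the transient channel uses a biphasic (derivative-like) response followed by squaring, so it fires only at onsets and offsets. The impulse-response shapes and time constants below are illustrative, not the paper's fitted values.

```python
import numpy as np

def gamma_irf(t, tau, n=2):
    # Gamma-shaped impulse response, normalized to unit area.
    h = (t / tau) ** (n - 1) * np.exp(-t / tau)
    return h / h.sum()

def two_channel_response(stimulus, dt=0.001, tau=0.005):
    """Sustained channel: stimulus * monophasic IRF.
    Transient channel: stimulus * biphasic IRF, then squared."""
    t = np.arange(0, 0.1, dt)
    h = gamma_irf(t, tau)
    sustained = np.convolve(stimulus, h)[:len(stimulus)]
    # Crude biphasic filter: the time derivative of the monophasic IRF.
    transient = np.convolve(stimulus, np.gradient(h))[:len(stimulus)] ** 2
    return sustained, transient

# A 300 ms flash sampled at 1 kHz: the sustained channel tracks the whole
# flash, while the transient channel spikes at stimulus onset and offset.
stim = np.zeros(1000)
stim[200:500] = 1.0
sustained, transient = two_channel_response(stim)
```

    Summing weighted copies of these two neural time courses, each convolved with a hemodynamic response function, is the general shape of the encoding step that maps stimulus to predicted fMRI response.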

  8. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation to newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of an attentional prioritisation for newborn faces by monkeys. Our demonstrations showed the validity of the dot-probe task for visual attention studies in monkeys and propose a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear if nursing experiences influence their perception and recognition of infantile appraisal stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.

    PubMed

    Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward

    2016-08-03

    Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many, but not all, of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. 
These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encode representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors 0270-6474/16/368188-12$15.00/0.

  10. Perceptual asymmetry in texture perception.

    PubMed

    Williams, D; Julesz, B

    1992-07-15

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the roles of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.
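
    The reason asymmetry is so troublesome for the filter models this abstract mentions can be made concrete: in a filter-rectify model, the segregation statistic only flips sign when figure and ground swap, so the model necessarily predicts equal discriminability in both directions. The textures and filter below are toy stand-ins, not the stimuli from the study.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def texture_energy(img, kernel):
    # Filter stage: correlate every window with the kernel...
    win = sliding_window_view(img, kernel.shape)
    response = (win * kernel).sum(axis=(-2, -1))
    # ...then rectify (here by squaring) to get a local energy map.
    return response ** 2

rng = np.random.default_rng(1)
texture_a = rng.random((32, 32))        # stand-ins for two texture patches
texture_b = rng.random((32, 32)) ** 2   # (illustrative, not the real stimuli)
edge = np.array([[1.0, -1.0]])          # toy oriented filter

d_ab = texture_energy(texture_a, edge).mean() - texture_energy(texture_b, edge).mean()
d_ba = texture_energy(texture_b, edge).mean() - texture_energy(texture_a, edge).mean()
# d_ab == -d_ba exactly: swapping figure and ground only negates the
# statistic, so predicted discriminability is identical either way.
```

    Human observers violate this symmetry for some texture pairs, which is why the abstract invokes a property (subjective closure) outside the filter-energy framework.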

  11. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
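
    The SSVEP measure behind this study rests on frequency tagging: each stimulus flickers at its own frequency, and the amplitude of the EEG spectrum at that frequency indexes how strongly the stimulus is processed. A minimal sketch on simulated data (the frequencies, amplitudes, and single-channel setup are hypothetical; real analyses average over trials and electrodes):

```python
import numpy as np

def ssvep_amplitude(eeg, fs, freq):
    """Amplitude of the steady-state response at a tagging frequency,
    read off the FFT of the recorded time course."""
    spectrum = np.fft.rfft(eeg) / (len(eeg) / 2)   # scale to sine amplitude
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - freq))])

fs = 500                        # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)     # 4 s epoch -> 0.25 Hz frequency resolution
# Simulated signal: attended targets tagged at 12 Hz with a larger
# response than ignored nontargets tagged at 15 Hz.
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 1.0 * np.sin(2 * np.pi * 15 * t)
attended = ssvep_amplitude(eeg, fs, 12)
ignored = ssvep_amplitude(eeg, fs, 15)
```

    The attentional facilitation the study reports corresponds to `attended` exceeding `ignored` even though the physical stimulation at both frequencies is equivalent.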

  12. Differential processing of binocular and monocular gloss cues in human visual cortex

    PubMed Central

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596

  13. Rethinking Visual Analytics for Streaming Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable. 
    Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.

  14. Spatial vision in older adults: perceptual changes and neural bases.

    PubMed

    McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N

    2018-05-17

    The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  15. Simulation of talking faces in the human brain improves auditory speech recognition

    PubMed Central

    von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.

    2008-01-01

    Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648

  16. Modeling Efficient Serial Visual Search

    DTIC Science & Technology

    2012-08-01

    parafovea size) to explore the parameter space associated with serial search efficiency. Visual search as a paradigm has been studied meticulously for...continues (Over, Hooge, Vlaskamp, & Erkelens, 2007). Over et al. (2007) found that participants initially attended to general properties of the search environ...the efficiency of human serial visual search. There were three parameters that were manipulated in the modeling of the visual search process in this

  17. SMALL COLOUR VISION VARIATIONS AND THEIR EFFECT IN VISUAL COLORIMETRY,

    DTIC Science & Technology

COLOR VISION, PERFORMANCE(HUMAN), TEST EQUIPMENT, CORRELATION TECHNIQUES, STATISTICAL PROCESSES, COLORS, ANALYSIS OF VARIANCE, AGING(MATERIALS), COLORIMETRY, BRIGHTNESS, ANOMALIES, PLASTICS, UNITED KINGDOM.

  18. Trajectory data analyses for pedestrian space-time activity study.

    PubMed

    Qi, Feng; Du, Fei

    2013-02-25

    It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission(1-3). An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty and thus the reason for paucity of studies of infectious disease transmission at the micro scale arise from the lack of detailed individual mobility data. Previously in transportation and tourism research detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants and collaboration from the participants greatly affects the quality of data(4). Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. 
We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation(5) involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping(6) and density volume rendering(7). We also include other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical and visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.

  19. A multi-pathway hypothesis for human visual fear signaling

    PubMed Central

    Silverstein, David N.; Ingvar, Martin

    2015-01-01

A hypothesis is proposed for five visual fear signaling pathways in humans, based on an analysis of anatomical connectivity from primate studies and human functional connectivity and tractography from brain imaging studies. Earlier work has identified possible subcortical and cortical fear pathways known as the “low road” and “high road,” which arrive at the amygdala independently. In addition to a subcortical pathway, we propose four cortical signaling pathways in humans along the visual ventral stream. All four of these traverse the LGN to the visual cortex (VC) and branch off at the inferior temporal area, with one projection directly to the amygdala; another traversing the orbitofrontal cortex; and two others passing through the parietal and then prefrontal cortex, one excitatory pathway via the ventral-medial area and one regulatory pathway via the ventral-lateral area. These pathways have progressively longer propagation latencies and may have progressively evolved with brain development to take advantage of higher-level processing. Using the anatomical path lengths and latency estimates for each of these five pathways, predictions are made for the relative processing times at selective ROIs and arrival at the amygdala, based on the presentation of a fear-relevant visual stimulus. Partial verification of the temporal dynamics of this hypothesis might be accomplished using experimental MEG analysis. Possible experimental protocols are suggested. PMID:26379513

  20. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.

    PubMed

    Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A

    2017-03-01

    The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
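The abstract does not state the exact form of its three-parameter TVA model, so the sketch below assumes a commonly used TVA-style exposure-duration function: performance rises exponentially with stimulus duration at a rate set by processing speed `v`, above a visual threshold `t0`, on top of a motor/guessing baseline `b`. The parameter names and values are illustrative, not the paper's.

```python
import math

def tva_score(t_ms, v, t0, b):
    """Expected probability of a correct response at stimulus duration t_ms.

    v  : visual processing speed (1/s), governs the exponential rise
    t0 : visual threshold (ms), the longest duration yielding no encoding
    b  : baseline response probability (motor/guessing baseline)
    """
    if t_ms <= t0:
        return b
    return b + (1.0 - b) * (1.0 - math.exp(-v * (t_ms - t0) / 1000.0))

# Illustration: a lower processing speed (as reported under scopolamine)
# lowers the expected score at every suprathreshold duration, while the
# threshold and baseline are unchanged.
durations = [10, 25, 50, 100, 200]  # ms
control = [tva_score(t, v=40.0, t0=15.0, b=0.1) for t in durations]
drugged = [tva_score(t, v=15.0, t0=15.0, b=0.1) for t in durations]
```

Fitting these three parameters per animal to mean scores across probed stimulus durations is what allows the individual-level estimates described in the abstract.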

  1. Developmental trajectory of neural specialization for letter and number visual processing.

    PubMed

    Park, Joonkoo; van den Berg, Berry; Chiang, Crystal; Woldorff, Marty G; Brannon, Elizabeth M

    2018-05-01

    Adult neuroimaging studies have demonstrated dissociable neural activation patterns in the visual cortex in response to letters (Latin alphabet) and numbers (Arabic numerals), which suggest a strong experiential influence of reading and mathematics on the human visual system. Here, developmental trajectories in the event-related potential (ERP) patterns evoked by visual processing of letters, numbers, and false fonts were examined in four different age groups (7-, 10-, 15-year-olds, and young adults). The 15-year-olds and adults showed greater neural sensitivity to letters over numbers in the left visual cortex and the reverse pattern in the right visual cortex, extending previous findings in adults to teenagers. In marked contrast, 7- and 10-year-olds did not show this dissociable neural pattern. Furthermore, the contrast of familiar stimuli (letters or numbers) versus unfamiliar ones (false fonts) showed stark ERP differences between the younger (7- and 10-year-olds) and the older (15-year-olds and adults) participants. These results suggest that both coarse (familiar versus unfamiliar) and fine (letters versus numbers) tuning for letters and numbers continue throughout childhood and early adolescence, demonstrating a profound impact of uniquely human cultural inventions on visual cognition and its development. © 2017 John Wiley & Sons Ltd.

  2. Preliminary study of visual effect of multiplex hologram

    NASA Astrophysics Data System (ADS)

    Fu, Huaiping; Xiong, Bingheng; Yang, Hong; Zhang, Xueguo

    2004-06-01

The process of any movement of a real object can be recorded and displayed by a multiplex holographic stereogram. We made an embossed multiplex holographic stereogram and a multiplex rainbow holographic stereogram: the rainbow stereogram reconstructs a dynamic 2D line drawing of the speech organs, and the embossed stereogram reconstructs the process of an old man drinking water. In this paper, we studied the visual effect of an embossed multiplex holographic stereogram made from 80 frames of 2D pictures. Forty-eight persons aged 13 to 67 were asked to view the hologram and then answer questions about the viewing experience. The results indicate that this kind of hologram is readily accepted by the human visual system. The paper also discusses the visual effect of multiplex holographic stereograms on the basis of visual perceptual psychology. It shows that planar multiplex holograms can record and present the movement of real animals and objects, and that viewers retain not only the perceptual constancies for shape, size, and color, but also perceptual constancy for binocular parallax.

  3. Monkey Visual Short-Term Memory Directly Compared to Humans

    PubMed Central

    Elmore, L. Caitlin; Wright, Anthony A.

    2015-01-01

Two adult rhesus monkeys were trained to detect which item in an array of memory items had changed using the same stimuli, viewing times, and delays as used with humans. Although the monkeys were extensively trained, they were less accurate than humans with the same array sizes (2, 4, & 6 items) and both stimulus types (colored squares, clip art), and showed calculated memory capacities of about one item (or less). Nevertheless, the memory results from both monkeys and humans for both stimulus types were well characterized by the inverse power-law of display size. This characterization provides a simple and straightforward summary of a fundamental process of visual short-term memory (how VSTM declines with memory load) that emphasizes species similarities based upon similar functional relationships. By matching monkey testing parameters more closely to those of humans, the similar functional relationships strengthen the evidence for similar processes underlying monkey and human VSTM. PMID:25706544
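The inverse power-law characterization means accuracy is modeled as a * N**(-b) for display size N, which is linear in log-log coordinates. A minimal sketch of fitting it that way is below; the accuracy values are illustrative, not the paper's data.

```python
import math

def fit_power_law(set_sizes, accuracies):
    """Least-squares fit of accuracy = a * N**(-b) in log-log space.

    Returns (a, b); b > 0 means accuracy falls with display size."""
    xs = [math.log(n) for n in set_sizes]
    ys = [math.log(p) for p in accuracies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    # intercept in log space gives a; negated slope gives the exponent b
    return math.exp(my - slope * mx), -slope

# Illustrative accuracies at the array sizes used (2, 4, 6 items):
a, b = fit_power_law([2, 4, 6], [0.90, 0.72, 0.62])
```

Comparing the fitted exponent b across species is one way such "similar functional relationships" can be made quantitative.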

  4. Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex

    PubMed Central

    Jeong, Su Keun

    2016-01-01

    The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. 
Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642

  5. Flies and humans share a motion estimation strategy that exploits natural scene statistics

    PubMed Central

    Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.

    2014-01-01

    Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
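The abstract does not specify which triple correlators the two species compute, so the sketch below uses one illustrative third-order correlator on a toy 1-D spatiotemporal intensity array. It shows the key property the abstract relies on: a pairwise (Reichardt-style) correlator responds identically to light and dark moving edges, while an odd-order (triple) correlator's sign tracks contrast polarity.

```python
def pair_correlation(frames, dx, dt):
    """Average second-order correlation <s(x,t) s(x+dx,t+dt)>
    over a 1-D spatiotemporal intensity array frames[t][x]."""
    T, X = len(frames), len(frames[0])
    total, count = 0.0, 0
    for t in range(T):
        for x in range(X):
            if 0 <= t + dt < T and 0 <= x + dx < X:
                total += frames[t][x] * frames[t + dt][x + dx]
                count += 1
    return total / count

def triple_correlation(frames, dx1, dt1, dx2, dt2):
    """Average third-order correlation
    <s(x,t) s(x+dx1,t+dt1) s(x+dx2,t+dt2)>."""
    T, X = len(frames), len(frames[0])
    total, count = 0.0, 0
    for t in range(T):
        for x in range(X):
            t1, x1 = t + dt1, x + dx1
            t2, x2 = t + dt2, x + dx2
            if 0 <= t1 < T and 0 <= x1 < X and 0 <= t2 < T and 0 <= x2 < X:
                total += frames[t][x] * frames[t1][x1] * frames[t2][x2]
                count += 1
    return total / count

# A rightward-drifting edge, in dark (-1) and light (+1) contrast polarity:
dark = [[-1.0 if x <= t else 0.0 for x in range(8)] for t in range(8)]
light = [[1.0 if x <= t else 0.0 for x in range(8)] for t in range(8)]

pair_dark = pair_correlation(dark, 1, 1)        # same for both polarities
pair_light = pair_correlation(light, 1, 1)
triple_dark = triple_correlation(dark, 1, 0, 1, 1)   # sign flips with polarity
triple_light = triple_correlation(light, 1, 0, 1, 1)
```

Because even-order correlations discard polarity, separate light- and dark-edge channels with third-order sensitivity can recover motion information that pair correlators miss.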

  6. Complete scanpaths analysis toolbox.

    PubMed

    Augustyniak, Piotr; Mikrut, Zbigniew

    2006-01-01

This paper presents a complete open software environment for the control, data processing, and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to visual information-based man-machine interfacing, human-emulated automatic visual systems, and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infrared reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower layer communicates with the eyetracker output file; the middle layer detects scanpath events on a physiological basis; and the upper layer consists of experiment schedule scripts, statistics, and summaries. Several examples of visual experiments carried out using the presented toolbox complete the paper.

  7. Object representations in visual memory: evidence from visual illusions.

    PubMed

    Ben-Shalom, Asaf; Ganel, Tzvi

    2012-07-26

    Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.

  8. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  9. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

Single pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable, scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model these scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strength of each filter at each frequency.
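A minimal 1-D sketch of the Stockham-style approach mentioned above (not McCann's own model): attenuate the low-spatial-frequency component of log luminance while keeping local detail. The window size and strength parameter are illustrative assumptions; making them scene dependent is exactly the open problem the abstract identifies.

```python
import math

def compress_1d(luminance, window, strength):
    """Homomorphic range-compression sketch: subtract a fraction of the
    local mean of log luminance (a crude low-pass filter).

    strength=0 leaves the signal untouched; strength=1 removes the
    local mean entirely."""
    logs = [math.log10(v) for v in luminance]
    n = len(logs)
    out = []
    for i in range(n):
        lo = max(0, i - window)
        hi = min(n, i + window + 1)
        local_mean = sum(logs[lo:hi]) / (hi - lo)  # low-frequency component
        out.append(10 ** (logs[i] - strength * local_mean))
    return out

# A 10000:1 scene: bright field on the left, deep shadow on the right.
scene = [1000.0] * 8 + [0.1] * 8
compressed = compress_1d(scene, window=3, strength=0.7)
```

A single fixed (window, strength) pair works for this toy scene but not for all scenes, which is the paper's argument for scene-dependent filter strengths.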

  10. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers, allowing spatial representations to be remembered and accumulated over time. The extended model, a recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
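A toy sketch of the core idea (not the authors' network): adding recurrence to a feedforward layer lets its representation accumulate over time, and different leak rates across layers yield a hierarchy of temporal receptive windows. The leak values below are arbitrary assumptions for illustration.

```python
def leaky_units(inputs, leak):
    """Recurrent leaky integration: h_t = (1 - leak) * h_{t-1} + leak * x_t.

    A small leak means slow forgetting, i.e. a long temporal
    receptive window."""
    h, out = 0.0, []
    for x in inputs:
        h = (1.0 - leak) * h + leak * x
        out.append(h)
    return out

# A brief pulse decays quickly in a "lower" layer (large leak) but
# lingers in a "higher" layer (small leak), mimicking the cortical
# hierarchy of process memory described in the abstract.
pulse = [1.0] + [0.0] * 9
lower = leaky_units(pulse, leak=0.9)
higher = leaky_units(pulse, leak=0.2)
```

In a full model each recurrent layer would sit on top of a CNN stage's spatial features; the scalar state here stands in for that feature vector.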

  11. Signed language and human action processing: evidence for functional constraints on the human mirror-neuron system.

    PubMed

    Corina, David P; Knapp, Heather Patterson

    2008-12-01

    In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.

  12. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

Prevailing hierarchical models propose that temporal processing capacity (the amount of information that a brain region processes per unit time) decreases at higher stages in the ventral stream regardless of domain. However, it is unknown whether temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates.
In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors.

  13. A theta rhythm in macaque visual cortex and its attentional modulation

    PubMed Central

    Spyropoulos, Georgios; Fries, Pascal

    2018-01-01

    Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632

  14. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  15. Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory.

    PubMed

    Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk

    2015-01-01

Human memory is content addressable, i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.

  16. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    PubMed

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  17. The Human Brain Uses Noise

    NASA Astrophysics Data System (ADS)

    Mori, Toshio; Kai, Shoichi

    2003-05-01

    We present the first observation of stochastic resonance (SR) in the human brain's visual processing area. The novel experimental protocol is to stimulate the right eye with a sub-threshold periodic optical signal and the left eye with a noisy one. The stimuli bypass sensory organs and are mixed in the visual cortex. With many noise sources present in the brain, higher brain functions, e.g. perception and cognition, may exploit SR.
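    The threshold mechanism behind stochastic resonance can be illustrated with a toy simulation (not the authors' binocular protocol; the detector, signal amplitude, and noise levels are all illustrative assumptions): a periodic signal too weak to cross a detection threshold on its own is transmitted best at an intermediate noise level, while too little noise yields no detections and too much noise swamps the signal.

```python
import math
import random

def detection_correlation(noise_sd, threshold=1.0, amp=0.6,
                          n_steps=20000, period=200, seed=0):
    """Feed a subthreshold sinusoid (amp < threshold) plus Gaussian noise
    into a binary threshold detector; return the correlation between the
    detector output and the underlying periodic signal."""
    rng = random.Random(seed)
    xs, ys = [], []
    for t in range(n_steps):
        s = amp * math.sin(2 * math.pi * t / period)
        out = 1.0 if s + rng.gauss(0.0, noise_sd) > threshold else 0.0
        xs.append(s)
        ys.append(out)
    mx, my = sum(xs) / n_steps, sum(ys) / n_steps
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

# stochastic resonance: intermediate noise transmits the signal best
low, mid, high = (detection_correlation(sd) for sd in (0.05, 0.4, 5.0))
```

    With almost no noise the detector never fires; with very strong noise it fires almost at random; the intermediate level tracks the signal.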

  18. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.

    PubMed

    Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A

    2014-08-01

    The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.

  19. Goal-Directed Visual Processing Differentially Impacts Human Ventral and Dorsal Visual Representations

    PubMed Central

    2017-01-01

    Recent studies have challenged the ventral/“what” and dorsal/“where” two-visual-processing-pathway view by showing the existence of “what” and “where” information in both pathways. Is the two-pathway distinction still valid? Here, we examined how goal-directed visual information processing may differentially impact visual representations in these two pathways. Using fMRI and multivariate pattern analysis, in three experiments on human participants (57% females), by manipulating whether color or shape was task-relevant and how they were conjoined, we examined shape-based object category decoding in occipitotemporal and parietal regions. We found that object category representations in all the regions examined were influenced by whether or not object shape was task-relevant. This task effect, however, tended to decrease as task-relevant and irrelevant features were more integrated, reflecting the well-known object-based feature encoding. Interestingly, task relevance played a relatively minor role in driving the representational structures of early visual and ventral object regions. They were driven predominantly by variations in object shapes. In contrast, the effect of task was much greater in dorsal than ventral regions, with object category and task relevance both contributing significantly to the representational structures of the dorsal regions. These results showed that, whereas visual representations in the ventral pathway are more invariant and reflect “what an object is,” those in the dorsal pathway are more adaptive and reflect “what we do with it.” Thus, despite the existence of “what” and “where” information in both visual processing pathways, the two pathways may still differ fundamentally in their roles in visual information representation. 
SIGNIFICANCE STATEMENT Visual information is thought to be processed in two distinctive pathways: the ventral pathway that processes “what” an object is and the dorsal pathway that processes “where” it is located. This view has been challenged by recent studies revealing the existence of “what” and “where” information in both pathways. Here, we found that goal-directed visual information processing differentially modulates shape-based object category representations in the two pathways. Whereas ventral representations are more invariant to the demand of the task, reflecting what an object is, dorsal representations are more adaptive, reflecting what we do with the object. Thus, despite the existence of “what” and “where” information in both pathways, visual representations may still differ fundamentally in the two pathways. PMID:28821655

  20. The influence of steroid sex hormones on the cognitive and emotional processing of visual stimuli in humans.

    PubMed

    Little, Anthony C

    2013-10-01

    Steroid sex hormones are responsible for some of the differences between men and women. In this article, I review evidence that steroid sex hormones impact on visual processing. Given prominent sex-differences, I focus on three topics for sex hormone effects for which there is most research available: 1. Preference and mate choice, 2. Emotion and recognition, and 3. Cerebral/perceptual asymmetries and visual-spatial abilities. For each topic, researchers have examined sex hormones and visual processing using various methods. I review indirect evidence addressing variation according to: menstrual cycle phase, pregnancy, puberty, and menopause. I further address studies of variation in testosterone and a measure of prenatal testosterone, 2D:4D, on visual processing. The most conclusive evidence, however, comes from experiments. Studies in which hormones are administrated are discussed. Overall, many studies demonstrate that sex steroids are associated with visual processing. However, findings are sometimes inconsistent, differences in methodology make strong comparisons between studies difficult, and we generally know more about activational than organizational effects. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  2. Differential processing of binocular and monocular gloss cues in human visual cortex.

    PubMed

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.

  3. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  4. Cholinergic enhancement of visual attention and neural oscillations in the human brain.

    PubMed

    Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon

    2012-03-06

    Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. A computational model of visual marking using an inter-connected network of spiking neurons: the spiking search over time & space model (sSoTS).

    PubMed

    Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo

    2006-01-01

    In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
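    As a rough sketch of the suppressive mechanism described above (not the sSoTS implementation itself; the single-neuron model and all constants are illustrative assumptions), a leaky integrate-and-fire unit with a calcium-gated K+ after-hyperpolarization conductance fires quickly at first and then slows under sustained drive, so a constantly present (old) input is progressively de-prioritised:

```python
def simulate_adaptive_lif(input_current, dt=1e-3, v_rest=-70e-3,
                          v_thresh=-50e-3, v_reset=-70e-3, e_k=-90e-3,
                          g_leak=10e-9, c_m=200e-12, tau_ca=200e-3,
                          g_ahp_step=10e-9):
    """Leaky integrate-and-fire neuron with a calcium-gated K+
    after-hyperpolarization (AHP) conductance: each spike adds 'calcium'
    (g_ahp), which pulls the membrane toward E_K and delays later spikes,
    producing spike-frequency adaptation."""
    v = v_rest
    g_ahp = 0.0
    spike_times = []
    for step, i_in in enumerate(input_current):
        # membrane equation: leak + adaptation (K+) current + input drive
        dv = (-g_leak * (v - v_rest) - g_ahp * (v - e_k) + i_in) / c_m
        v += dv * dt
        g_ahp -= g_ahp / tau_ca * dt  # calcium decays between spikes
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
            g_ahp += g_ahp_step       # calcium influx on each spike
    return spike_times

# 2 s of constant suprathreshold drive: firing slows as adaptation builds
spikes = simulate_adaptive_lif([0.4e-9] * 2000)
```

    The inter-spike intervals lengthen over time, mimicking the suppression of a previously attended item.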

  6. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  8. Structural and functional analyses of human cerebral cortex using a surface-based atlas

    NASA Technical Reports Server (NTRS)

    Van Essen, D. C.; Drury, H. A.

    1997-01-01

    We have analyzed the geometry, geography, and functional organization of human cerebral cortex using surface reconstructions and cortical flat maps of the left and right hemispheres generated from a digital atlas (the Visible Man). The total surface area of the reconstructed Visible Man neocortex is 1570 cm2 (both hemispheres), approximately 70% of which is buried in sulci. By linking the Visible Man cerebrum to the Talairach stereotaxic coordinate space, the locations of activation foci reported in neuroimaging studies can be readily visualized in relation to the cortical surface. The associated spatial uncertainty was empirically shown to have a radius in three dimensions of approximately 10 mm. Application of this approach to studies of visual cortex reveals the overall patterns of activation associated with different aspects of visual function and the relationship of these patterns to topographically organized visual areas. Our analysis supports a distinction between an anterior region in ventral occipito-temporal cortex that is selectively involved in form processing and a more posterior region (in or near areas VP and V4v) involved in both form and color processing. Foci associated with motion processing are mainly concentrated in a region along the occipito-temporal junction, the ventral portion of which overlaps with foci also implicated in form processing. Comparisons between flat maps of human and macaque monkey cerebral cortex indicate significant differences as well as many similarities in the relative sizes and positions of cortical regions known or suspected to be homologous in the two species.

  9. Recognition Decisions From Visual Working Memory Are Mediated by Continuous Latent Strengths.

    PubMed

    Ricker, Timothy J; Thiele, Jonathan E; Swagman, April R; Rouder, Jeffrey N

    2017-08-01

    Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the contents of visual working memory follow a continuous decision process of graded information about the correct choice or a discrete decision process reflecting only knowing and guessing. We find a clear pattern in favor of a continuous latent strength model of visual working memory-based decision making, supporting the notion that visual recognition decision processes are impacted by the degree of matching between the contents of working memory and the choices given. Relation to relevant findings and the implications for human information processing more generally are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  10. Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.

    PubMed

    Seymour, Kiley J; Clifford, Colin W G

    2012-05-01

    Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
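    The logic of the conjunction argument can be sketched with simulated data (a hypothetical toy, not the study's analysis; a nearest-centroid rule stands in for the linear classifier, and the voxel response models are invented for illustration): when voxels carry only independent motion and disparity information, the two pairings produce identical response distributions and decoding sits at chance; adding conjunction-tuned voxels makes the pairings decodable.

```python
import random

def simulate_trial(stim, n_feat=20, n_conj=0, noise=1.0, rng=None):
    """Voxel pattern for one trial. Feature-tuned voxels receive the same
    summed drive for both stimuli (identical features, different pairing);
    conjunction-tuned voxels respond only to the pairing in stimulus 'A'."""
    feat = [2.0 + rng.gauss(0.0, noise) for _ in range(n_feat)]
    conj = [(1.0 if stim == "A" else 0.0) + rng.gauss(0.0, noise)
            for _ in range(n_conj)]
    return feat + conj

def decode_accuracy(n_conj, n_trials=200, seed=1):
    """Nearest-centroid decoding: first half of trials train, second half test."""
    rng = random.Random(seed)
    labels = ["A" if i % 2 == 0 else "B" for i in range(n_trials)]
    data = [simulate_trial(s, n_conj=n_conj, rng=rng) for s in labels]
    half = n_trials // 2

    def centroid(cls):
        rows = [x for s, x in zip(labels[:half], data[:half]) if s == cls]
        return [sum(col) / len(rows) for col in zip(*rows)]

    cents = {c: centroid(c) for c in "AB"}

    def dist2(x, c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

    hits = sum(
        min("AB", key=lambda c: dist2(x, cents[c])) == s
        for s, x in zip(labels[half:], data[half:])
    )
    return hits / half

acc_features_only = decode_accuracy(n_conj=0)       # pairing info absent
acc_with_conjunctions = decode_accuracy(n_conj=10)  # pairing decodable
```

    This mirrors the paper's control logic: above-chance decoding requires some voxels sensitive to the unique conjunctions, not just to the individual features.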

  11. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    PubMed Central

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
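    The VCEA gap-adjustment step is not specified in this record, but the histogram-equalization baseline it modifies can be sketched as follows (a minimal grayscale implementation; the list-of-rows image format is an assumption for illustration):

```python
def histogram_equalize(image, levels=256):
    """Plain histogram equalization (HE): remap gray levels so the
    cumulative distribution of the output is approximately uniform.
    `image` is a list of rows of integer gray values in [0, levels)."""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function (CDF) of the gray levels
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    # classic HE mapping: stretch the CDF over the full gray range
    lut = [round((cdf[g] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for g in range(levels)]
    return [[lut[p] for p in row] for row in image]

# a low-contrast image confined to [100, 103] is spread across [0, 255]
img = [[100, 101, 102, 103], [100, 101, 102, 103]]
out = histogram_equalize(img)
```

    The over-stretching visible even in this tiny example (adjacent gray levels pushed 85 levels apart) is exactly the excessive-enhancement problem the abstract says VCEA addresses by adjusting the spaces between adjacent output levels.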

  12. Functional Characterization and Differential Coactivation Patterns of Two Cytoarchitectonic Visual Areas on the Human Posterior Fusiform Gyrus

    PubMed Central

    Caspers, Julian; Zilles, Karl; Amunts, Katrin; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.

    2016-01-01

    The ventral stream of the human extrastriate visual cortex shows a considerable functional heterogeneity from early visual processing (posterior) to higher, domain-specific processing (anterior). The fusiform gyrus hosts several of those “high-level” functional areas. We recently found a subdivision of the posterior fusiform gyrus on the microstructural level, that is, two distinct cytoarchitectonic areas, FG1 and FG2 (Caspers et al., Brain Structure & Function, 2013). To gain a first insight in the function of these two areas, here we studied their behavioral involvement and coactivation patterns by means of meta-analytic connectivity modeling based on the BrainMap database (www.brainmap.org), using probabilistic maps of these areas as seed regions. The coactivation patterns of the areas support the concept of a common involvement in a core network subserving different cognitive tasks, that is, object recognition, visual language perception, or visual attention. In addition, the analysis supports the previous cytoarchitectonic parcellation, indicating that FG1 appears as a transitional area between early and higher visual cortex and FG2 as a higher-order one. The latter area is furthermore lateralized, as it shows strong relations to the visual language processing system in the left hemisphere, while its right side is more strongly associated with face-selective regions. These findings indicate that functional lateralization of area FG2 relies on a different pattern of connectivity rather than side-specific cytoarchitectonic features. PMID:24038902

  13. Interpersonal touch suppresses visual processing of aversive stimuli

    PubMed Central

    Kawamichi, Hiroaki; Kitada, Ryo; Yoshihara, Kazufumi; Takahashi, Haruka K.; Sadato, Norihiro

    2015-01-01

    Social contact is essential for survival in human society. A previous study demonstrated that interpersonal contact alleviates pain-related distress by suppressing the activity of its underlying neural network. One explanation for this is that attention is shifted from the cause of distress to interpersonal contact. To test this hypothesis, we conducted a functional MRI (fMRI) study wherein eight pairs of close female friends rated the aversiveness of aversive and non-aversive visual stimuli under two conditions: joining hands either with a rubber model (rubber-hand condition) or with a close friend (human-hand condition). Subsequently, participants rated the overall comfortableness of each condition. The rating result after fMRI indicated that participants experienced greater comfortableness during the human-hand compared to the rubber-hand condition, whereas aversiveness ratings during fMRI were comparable across conditions. The fMRI results showed that the two conditions commonly produced aversive-related activation in both sides of the visual cortex (including V1, V2, and V5). An interaction between aversiveness and hand type showed rubber-hand-specific activation for (aversive > non-aversive) in other visual areas (including V1, V2, V3, and V4v). The effect of interpersonal contact on the processing of aversive stimuli was negatively correlated with the increment of attentional focus to aversiveness measured by a pain-catastrophizing scale. These results suggest that interpersonal touch suppresses the processing of aversive visual stimuli in the occipital cortex. This effect covaried with aversiveness-insensitivity, such that aversive-insensitive individuals might require a lesser degree of attentional capture to aversive-stimulus processing. As joining hands did not influence the subjective ratings of aversiveness, interpersonal touch may operate by redirecting excessive attention away from aversive characteristics of the stimuli. PMID:25904856

  14. How Many Words Is a Picture Worth? Integrating Visual Literacy in Language Learning with Photographs

    ERIC Educational Resources Information Center

    Baker, Lottie

    2015-01-01

    Cognitive research has shown that the human brain processes images quicker than it processes words, and images are more likely than text to remain in long-term memory. With the expansion of technology that allows people from all walks of life to create and share photographs with a few clicks, the world seems to value visual media more than ever…

  15. Noise-Induced Entrainment and Stochastic Resonance in Human Brain Waves

    NASA Astrophysics Data System (ADS)

    Mori, Toshio; Kai, Shoichi

    2002-05-01

    We present the first observation of stochastic resonance (SR) in the human brain's visual processing area. The novel experimental protocol is to stimulate the right eye with a subthreshold periodic optical signal and the left eye with a noisy one. The stimuli bypass sensory organs and are mixed in the visual cortex. With many noise sources present in the brain, higher brain functions, e.g., perception and cognition, may exploit SR.

  16. Applying Strategic Visualization(Registered Trademark) to Lunar and Planetary Mission Design

    NASA Technical Reports Server (NTRS)

    Frassanito, John R.; Cooke, D. R.

    2002-01-01

    NASA teams, such as the NASA Exploration Team (NEXT), utilize advanced computational visualization processes to develop mission designs and architectures for lunar and planetary missions. One such process, Strategic Visualization (trademark), is a tool used extensively to help mission designers visualize various design alternatives and present them to other participants of their team. The participants, which may include NASA, industry, and the academic community, are distributed within a virtual network. Consequently, computer animation and other digital techniques provide an efficient means to communicate top-level technical information among team members. Today, Strategic Visualization (trademark) is used extensively both in the mission design process within the technical community, and to communicate the value of space exploration to the general public. Movies and digital images have been generated and shown on nationally broadcast television and the Internet, as well as in magazines and digital media. In our presentation we will show excerpts of a computer-generated animation depicting the reference Earth/Moon L1 Libration Point Gateway architecture. The Gateway serves as a staging corridor for human expeditions to the lunar poles and other surface locations. Also shown are crew transfer systems and current reference lunar excursion vehicles, as well as the human and robotic construction of an inflatable telescope array for deployment to the Sun/Earth Libration Point.

  17. Distinct spatio-temporal profiles of beta-oscillations within visual and sensorimotor areas during action recognition as revealed by MEG.

    PubMed

    Pavlidou, Anastasia; Schnitzler, Alfons; Lange, Joachim

    2014-05-01

    The neural correlates of action recognition have been widely studied in visual and sensorimotor areas of the human brain. However, the role of neuronal oscillations involved during the process of action recognition remains unclear. Here, we were interested in how the plausibility of an action modulates neuronal oscillations in visual and sensorimotor areas. Subjects viewed point-light displays (PLDs) of biomechanically plausible and implausible versions of the same actions. Using magnetoencephalography (MEG), we examined dynamic changes of oscillatory activity during these action recognition processes. While both actions elicited oscillatory activity in visual and sensorimotor areas in several frequency bands, a significant difference was confined to the beta-band (∼20 Hz). An increase of power for plausible actions was observed in left temporal, parieto-occipital and sensorimotor areas of the brain, in the beta-band in successive order between 1650 and 2650 msec. These distinct spatio-temporal beta-band profiles suggest that the action recognition process is modulated by the degree of biomechanical plausibility of the action, and that spectral power in the beta-band may provide a functional interaction between visual and sensorimotor areas in humans. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
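    The GC skew reported by the DDV interface is a standard sliding-window statistic, (G − C)/(G + C). A minimal sketch of that computation, assuming the sequence is a plain string (the function name and window defaults are illustrative, not taken from the DDV source):

    ```python
    def gc_skew(seq, window=1000, step=500):
        """Compute GC skew, (G - C) / (G + C), over sliding windows.

        Returns a list of (window_start, skew) pairs; windows containing
        no G or C bases yield a skew of 0.0.
        """
        seq = seq.upper()
        skews = []
        for start in range(0, max(1, len(seq) - window + 1), step):
            chunk = seq[start:start + window]
            g, c = chunk.count("G"), chunk.count("C")
            skews.append((start, (g - c) / (g + c) if g + c else 0.0))
        return skews

    # Leading and lagging replication strands typically show opposite skew
    # signs, which is what makes skew useful for visual genome inspection.
    print(gc_skew("GGGGGGCC" * 4, window=16, step=16))  # → [(0, 0.5), (16, 0.5)]
    ```

    Plotting such values along the chromosome gives the kind of visually detectable pattern (e.g., skew sign flips near replication origins) that the interface exposes interactively.
    
    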

  19. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    PubMed

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational methods.
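    The position tolerance attributed to the HMAX model above comes from its C layers, which max-pool simple-cell (S-layer) responses over local spatial neighborhoods. A minimal NumPy sketch of that pooling step (illustrative only, not the authors' implementation):

    ```python
    import numpy as np

    def c_layer(s_responses, pool=2):
        """HMAX-style C layer: max-pool an S-layer response map over
        non-overlapping pool x pool neighborhoods, yielding responses
        that tolerate small shifts of the input pattern."""
        h, w = s_responses.shape
        h, w = h - h % pool, w - w % pool  # trim to a multiple of pool
        blocks = s_responses[:h, :w].reshape(h // pool, pool, w // pool, pool)
        return blocks.max(axis=(1, 3))

    # A response peak moving within one pooling neighborhood leaves the C
    # layer unchanged -- the basis of HMAX's position tolerance.
    a = np.zeros((4, 4)); a[0, 0] = 1.0
    b = np.zeros((4, 4)); b[1, 1] = 1.0  # shifted, but same 2x2 block
    assert np.array_equal(c_layer(a), c_layer(b))
    ```

    The full model additionally pools over scales; the same max operation applied across filter sizes gives the scale tolerance mentioned in the abstract.
    
    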

  20. Rapid Simultaneous Enhancement of Visual Sensitivity and Perceived Contrast during Saccade Preparation

    PubMed Central

    Rolfs, Martin; Carrasco, Marisa

    2012-01-01

    Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086

  1. A computational model of spatial visualization capacity.

    PubMed

    Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A

    2008-09-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.
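    The decay mechanism cited above follows ACT-R's standard base-level learning equation, B_i = ln(Σ_j t_j^(−d)), where each t_j is the time since a past presentation of a memory chunk. A hedged sketch (the default decay rate d = 0.5 is ACT-R's conventional value; the function name is illustrative):

    ```python
    import math

    def base_level_activation(presentation_times, now, d=0.5):
        """ACT-R base-level learning: activation B = ln(sum over past
        presentations of age**-d), where age is the time elapsed since
        each presentation and d is the decay rate (default 0.5)."""
        ages = [now - t for t in presentation_times if now > t]
        return math.log(sum(age ** -d for age in ages))

    # A chunk rehearsed recently and often is more active -- and thus
    # more retrievable -- than one presented once, long ago.
    recent = base_level_activation([10.0, 20.0, 28.0], now=30.0)
    stale = base_level_activation([1.0], now=30.0)
    assert recent > stale
    ```

    Associative and spatial interference would enter the model as additional penalty terms on this activation; the equation above captures only the decay component named in the abstract.
    
    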

  2. Visual analytics for aviation safety: A collaborative approach to sensemaking

    NASA Astrophysics Data System (ADS)

    Wade, Andrew

    Visual analytics, the "science of analytical reasoning facilitated by interactive visual interfaces", is more than just visualization. Understanding the human reasoning process is essential for designing effective visualization tools and providing correct analyses. This thesis describes the evolution, application and evaluation of a new method for studying analytical reasoning that we have labeled paired analysis. Paired analysis combines subject matter experts (SMEs) and tool experts (TE) in an analytic dyad, here used to investigate aircraft maintenance and safety data. The method was developed and evaluated using interviews, pilot studies and analytic sessions during an internship at the Boeing Company. By enabling a collaborative approach to sensemaking that can be captured by researchers, paired analysis yielded rich data on human analytical reasoning that can be used to support analytic tool development and analyst training. Keywords: visual analytics, paired analysis, sensemaking, Boeing, collaborative analysis.

  3. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  4. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.

    PubMed

    Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

    In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.

  5. Dangerous animals capture and maintain attention in humans.

    PubMed

    Yorzinski, Jessica L; Penkunas, Michael J; Platt, Michael L; Coss, Richard G

    2014-05-28

    Predation is a major source of natural selection on primates and may have shaped attentional processes that allow primates to rapidly detect dangerous animals. Because ancestral humans were subjected to predation, a process that continues at very low frequencies, we examined the visual processes by which men and women detect dangerous animals (snakes and lions). We recorded the eye movements of participants as they detected images of a dangerous animal (target) among arrays of nondangerous animals (distractors) as well as detected images of a nondangerous animal (target) among arrays of dangerous animals (distractors). We found that participants were quicker to locate targets when the targets were dangerous animals compared with nondangerous animals, even when spatial frequency and luminance were controlled. The participants were slower to locate nondangerous targets because they spent more time looking at dangerous distractors, a process known as delayed disengagement, and looked at a larger number of dangerous distractors. These results indicate that dangerous animals capture and maintain attention in humans, suggesting that historical predation has shaped some facets of visual orienting and its underlying neural architecture in modern humans.

  6. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  7. Evidence for auditory-visual processing specific to biological motion.

    PubMed

    Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F

    2012-01-01

    Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference and thereby delays the response.
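    The statistical-facilitation account invoked in Experiment 1 predicts a redundant-signal gain purely from a race between independent modalities: the response is triggered by whichever channel finishes first. A simulation sketch, with purely illustrative latency distributions:

    ```python
    import random

    def simulate_mean_rt(trials=100000, seed=0):
        """Race model (statistical facilitation): on audio-visual trials
        the response latency is the minimum of two independent channel
        finishing times, so mean RT drops with no cross-modal
        interaction assumed at all."""
        rng = random.Random(seed)
        audio = [rng.gauss(450, 60) for _ in range(trials)]
        visual = [rng.gauss(440, 60) for _ in range(trials)]
        both = [min(a, v) for a, v in zip(audio, visual)]
        mean = lambda xs: sum(xs) / len(xs)
        return mean(audio), mean(visual), mean(both)

    a, v, av = simulate_mean_rt()
    # The bimodal mean falls below BOTH unimodal means -- the redundancy
    # gain that Experiment 1 found sufficient to explain the data.
    print(f"audio {a:.0f} ms, visual {v:.0f} ms, redundant {av:.0f} ms")
    ```

    Evidence for genuine integration would require redundant-target RTs faster than this race prediction (e.g., violations of Miller's race-model inequality), which the abstract reports was not found.
    
    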

  8. Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.

    PubMed

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F

    2003-04-15

    When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.

  9. Experiences in using DISCUS for visualizing human communication

    NASA Astrophysics Data System (ADS)

    Groehn, Matti; Nieminen, Marko; Haho, Paeivi; Smeds, Riitta

    2000-02-01

    In this paper, we present further improvements to the DISCUS software, which can be used to record and analyze the flow and content of discussion in business process simulation sessions. The tool was initially introduced at the 'Visual Data Exploration and Analysis IV' conference. The initial features of the tool enabled the visualization of discussion flow in business process simulation sessions and the creation of SOM analyses. The improvements to the tool consist of additional visualization possibilities that enable quick on-line analyses and improved graphical statistics. We have also created the very first interface to audio data and implemented two ways to visualize it. We also outline additional possibilities to use the tool in other application areas: these include usability testing and the possibility of using the tool for capturing design rationale in a product development process. The data gathered with DISCUS may be used in other applications, and further work may be done with data mining techniques.

  10. Visualization of airflow growing soap bubbles

    NASA Astrophysics Data System (ADS)

    Al Rahbi, Hamood; Bock, Matthew; Ryu, Sangjin

    2016-11-01

    Visualizing airflow inside growing soap bubbles can answer questions regarding the fluid dynamics of soap bubble blowing, which is a model system for flows with a gas-liquid-gas interface. Also, understanding the soap bubble blowing process is practical because it can contribute to controlling industrial processes similar to soap bubble blowing. In this study, we visualized airflow which grows soap bubbles using the smoke wire technique to understand how airflow blows soap bubbles. The soap bubble blower setup was built to mimic the human blowing process of soap bubbles, which consists of a blower, a nozzle and a bubble ring. The smoke wire was placed between the nozzle and the bubble ring, and smoke-visualized airflow was captured using a high speed camera. Our visualization shows how air jet flows into the growing soap bubble on the ring and how the airflow interacts with the soap film of growing bubble.

  11. Visual Perceptual Learning and Models.

    PubMed

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
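    The reweighting account mentioned above treats sensory channels as fixed and attributes learning entirely to changes in the readout weights feeding a decision unit. A minimal delta-rule sketch under illustrative assumptions (two channels, Gaussian channel and decision noise; none of the parameter values come from the review):

    ```python
    import random

    def train_readout(trials=2000, lr=0.01, seed=1):
        """Reweighting model of perceptual learning: sensory channels are
        fixed and noisy; only the readout weights to the decision unit
        change (delta rule), so discrimination improves with practice."""
        rng = random.Random(seed)
        w = [0.0, 0.0]                  # readout weights, initially uncommitted
        early = late = 0
        for t in range(trials):
            label = rng.choice([-1, 1])          # stimulus tilt: left vs right
            x = [label + rng.gauss(0, 0.5),      # informative channel
                 rng.gauss(0, 0.5)]              # task-irrelevant channel
            drive = w[0]*x[0] + w[1]*x[1] + rng.gauss(0, 0.5)  # decision noise
            decision = 1 if drive >= 0 else -1
            if decision == label:
                if t < 200: early += 1
                elif t >= trials - 200: late += 1
            err = label - (w[0]*x[0] + w[1]*x[1])
            for i in range(2):                   # delta rule on readout only
                w[i] += lr * err * x[i]
        return early / 200, late / 200

    early, late = train_readout()
    assert late > early  # practice improves accuracy with no channel change
    ```

    Because the channels never change, improvement here reflects a better readout of unchanged sensory evidence, which is the core claim of the reweighting framework for interpreting learning and transfer.
    
    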

  12. Abnormal visual scan paths: a psychophysiological marker of delusions in schizophrenia.

    PubMed

    Phillips, M L; David, A S

    1998-02-09

    The role of the visual scan path as a psychophysiological marker of visual attention has been highlighted previously (Phillips and David, 1994). We investigated information processing in schizophrenic patients with severe delusions and again when the delusions were subsiding using visual scan path measurements. We aimed to demonstrate a specific deficit in processing human faces in deluded subjects by relating this to abnormal viewing strategies. Scan paths were measured in six deluded and five non-deluded schizophrenics (matched for medication and negative symptoms), and nine age-matched normal controls. Deluded subjects had abnormal scan paths in a recognition task, fixating non-feature areas significantly more than controls, but were equally accurate. Re-testing after improvement in delusional conviction revealed fewer group differences. The results suggest state-dependent abnormal information processing in schizophrenics when deluded, with reliance on less-salient visual information for decision-making.

  13. Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex.

    PubMed

    Li, Yuan; Zhang, Chuncheng; Hou, Chunping; Yao, Li; Zhang, Jiacai; Long, Zhiying

    2017-12-21

    Binocular disparity provides a powerful cue for depth perception in a stereoscopic environment. Despite increasing knowledge of the cortical areas that process disparity from neuroimaging studies, the neural mechanism underlying disparity sign processing [crossed disparity (CD)/uncrossed disparity (UD)] is still poorly understood. In the present study, functional magnetic resonance imaging (fMRI) was used to explore different neural features that are relevant to disparity-sign processing. We performed an fMRI experiment on 27 right-handed healthy human volunteers by using both general linear model (GLM) and multi-voxel pattern analysis (MVPA) methods. First, GLM was used to determine the cortical areas that displayed different responses to different disparity signs. Second, MVPA was used to determine how the cortical areas discriminate different disparity signs. The GLM analysis results indicated that shapes with UD induced significantly stronger activity in the sub-region (LO) of the lateral occipital cortex (LOC) than those with CD. The results of MVPA based on region of interest indicated that areas V3d and V3A displayed higher accuracy in the discrimination of crossed and uncrossed disparities than LOC. The results of searchlight-based MVPA indicated that the dorsal visual cortex showed significantly higher prediction accuracy than the ventral visual cortex, and the sub-region LO of LOC showed high accuracy in the discrimination of crossed and uncrossed disparities. The results suggest that the dorsal visual areas are more discriminative of disparity sign than the ventral visual areas, even though they showed no differential overall response to the two disparity signs. Moreover, the LO sub-region of the ventral visual cortex is relevant to the recognition of shapes with different disparity signs and is itself discriminative of disparity sign.
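    MVPA of the kind described here trains a classifier on multi-voxel activity patterns and measures how well it generalizes to held-out data. A minimal leave-one-out sketch using a nearest-class-mean classifier on synthetic patterns (purely illustrative; the study's actual searchlight pipeline is not reproduced):

    ```python
    import numpy as np

    def loo_decode(patterns, labels):
        """Leave-one-out MVPA with a nearest-class-mean classifier: each
        held-out pattern is assigned to the class whose mean pattern
        (computed from the remaining data) is closest in Euclidean
        distance. Returns the decoding accuracy."""
        patterns = np.asarray(patterns, dtype=float)
        labels = np.asarray(labels)
        correct = 0
        for i in range(len(patterns)):
            train = np.ones(len(patterns), dtype=bool)
            train[i] = False
            means = {lab: patterns[train & (labels == lab)].mean(axis=0)
                     for lab in np.unique(labels)}
            pred = min(means, key=lambda lab: np.linalg.norm(patterns[i] - means[lab]))
            correct += pred == labels[i]
        return correct / len(patterns)

    # Synthetic "crossed" vs "uncrossed" voxel patterns: 50 voxels whose
    # mean response differs by condition, plus unit-variance noise.
    rng = np.random.default_rng(0)
    crossed = rng.normal(0.0, 1.0, (20, 50)) + 1.0
    uncrossed = rng.normal(0.0, 1.0, (20, 50)) - 1.0
    acc = loo_decode(np.vstack([crossed, uncrossed]), np.array([0]*20 + [1]*20))
    print(f"decoding accuracy: {acc:.2f}")  # well above the 0.5 chance level
    ```

    Accuracy above chance indicates that the pattern of responses carries condition information even when the mean response does not, which is how the study can find dorsal areas discriminative in MVPA without a univariate (GLM) difference.
    
    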

  14. The µ-opioid system promotes visual attention to faces and eyes.

    PubMed

    Chelnokova, Olga; Laeng, Bruno; Løseth, Guro; Eikemo, Marie; Willoch, Frode; Leknes, Siri

    2016-12-01

    Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, whereas antagonism decrease overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  15. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    In the rapidly developing 21st century, with the continuous progress of science and technology, human society has entered the era of information and big data, and lifestyles and aesthetic systems have changed accordingly, so the emerging field of information visualization is increasingly popular. Information visualization design is the process of visualizing all kinds of complex information data so that it can be absorbed quickly, saving time and cost. Along with this development, information design has also attracted growing attention, and emotional, people-oriented design is an indispensable part of it. This paper probes information visualization design through an emotional analysis of information design, grounded in the social context of people-oriented experience and approached from the perspective of art and design. The discussion is organized around the three levels of emotional information design: the instinctive level, the behavioral level, and the reflective level.

  16. Differential temporal dynamics during visual imagery and perception.

    PubMed

    Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj

    2018-05-29

    Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.

  17. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  18. MAVEN-SA: Model-Based Automated Visualization for Enhanced Situation Awareness

    DTIC Science & Technology

    2005-11-01

    34 methods. But historically, as arts evolve, these how to methods become systematized and codified (e.g. the development and refinement of color theory ...schema (as necessary) 3. Draw inferences from new knowledge to support decision making process 33 Visual language theory suggests that humans process...informed by theories of learning. Over the years, many types of software have been developed to support student learning. The various types of

  19. Context processing in adolescents with autism spectrum disorder: How complex could it be?

    PubMed

    Ben-Yosef, Dekel; Anaki, David; Golan, Ofer

    2017-03-01

    The ability of individuals with Autism Spectrum Disorder (ASD) to process context has long been debated: According to the Weak Central Coherence theory, ASD is characterized by poor global processing and, consequently, poor context processing. In contrast, the Social Cognition theory argues that individuals with ASD will present difficulties only in social context processing. The complexity theory of autism suggests context processing in ASD will depend on task complexity. The current study examined this controversy through two priming tasks, one presenting human stimuli (facial expressions) and the other presenting non-human stimuli (animal faces). Both tasks presented visual targets, preceded by congruent, incongruent, or neutral auditory primes. Local and global processing were examined by presenting the visual targets in three spatial frequency conditions: High frequency, low frequency, and broadband. Tasks were administered to 16 adolescents with high functioning ASD and 16 matched typically developing adolescents. Reaction time and accuracy were measured for each task in each condition. Results indicated that individuals with ASD processed context for both human and non-human stimuli, except in one condition, in which human stimuli had to be processed globally (i.e., target presented in low frequency). The task demands presented in this condition, and the performance deficit shown in the ASD group as a result, could be understood in terms of cognitive overload. These findings provide support for the complexity theory of autism and extend it. Our results also demonstrate how associative priming could support intact context processing of human and non-human stimuli in individuals with ASD. Autism Res 2017, 10: 520-530. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  20. The role of early visual cortex in visual short-term memory and visual attention.

    PubMed

    Offen, Shani; Schluppeck, Denis; Heeger, David J

    2009-06-01

    We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.

  1. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces.

    PubMed

    Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G

    2017-08-01

    Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.

  2. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  3. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    PubMed

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and causal Bayesian inference for two causes (two senses such as vision and audition), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effects of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model simulation.
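    The reliability-weighted cue fusion at the core of such Bayesian integration accounts can be sketched in a few lines. This is a minimal illustration of standard maximum-likelihood cue combination, not the authors' recurrent population-coding model; the cue means and variances are assumed values.

```python
def fuse_gaussian_cues(mu_v, var_v, mu_p, var_p):
    """Maximum-likelihood fusion of two Gaussian cue estimates
    (e.g., visual and proprioceptive hand-position estimates).

    Each cue is weighted by its reliability (inverse variance);
    the fused variance is never larger than that of the more
    reliable single cue.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)  # weight on vision
    mu = w_v * mu_v + (1 - w_v) * mu_p
    var = 1 / (1 / var_v + 1 / var_p)
    return mu, var

# Hypothetical estimates (degrees): vision is more reliable than
# proprioception, so the fused estimate lies closer to the visual one.
mu, var = fuse_gaussian_cues(mu_v=10.0, var_v=1.0, mu_p=14.0, var_p=4.0)
```

    One consequence visible in the formula: training that lowers a modality's variance automatically increases that modality's weight in the fused estimate, which connects the training results above to the integration model.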

  4. Foveal analysis and peripheral selection during active visual sampling

    PubMed Central

    Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.

    2014-01-01

    Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588
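    The noise-classification logic this abstract describes (correlate the injected feature noise at each point in time with the observer's behaviour) can be sketched as follows. The data here are simulated and the function name is illustrative, not the authors' code.

```python
import numpy as np

def temporal_classification_image(noise, choices):
    """Per-timepoint correlation between injected feature noise
    (trials x timepoints) and binary behavioural choices (trials,).

    Timepoints where the correlation departs from zero are those at
    which the noise influenced behaviour, i.e. the window of
    information uptake.
    """
    noise = np.asarray(noise, dtype=float)
    choices = np.asarray(choices, dtype=float)
    nz = (noise - noise.mean(axis=0)) / noise.std(axis=0)
    cz = (choices - choices.mean()) / choices.std()
    return (nz * cz[:, None]).mean(axis=0)  # correlation per timepoint

# Simulated observer whose choice depends only on the noise at
# timepoint 3: the recovered kernel peaks there and is flat elsewhere.
rng = np.random.default_rng(0)
noise = rng.standard_normal((2000, 10))
choices = (noise[:, 3] > 0).astype(float)
kernel = temporal_classification_image(noise, choices)
```

    In the dual-task version above, one such kernel would be computed for the foveal feature and one for the peripheral feature, and their time courses compared within the fixation.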

  5. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.
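    Redundancy gains are conventionally tested against Miller's race model inequality: if the redundant-condition reaction-time distribution beats the combined single-modality distributions, a parallel race of independent channels cannot explain the gain, and coactivation (integrated processing) is inferred. A sketch with made-up reaction times, not the study's data:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of a reaction-time sample, evaluated at t (ms)."""
    return float(np.mean(np.asarray(rts) <= t))

def race_model_violation(rt_vis, rt_aud, rt_red, t):
    """Miller's race model inequality at time t:
    P(RT <= t | redundant) <= P(RT <= t | visual) + P(RT <= t | auditory).

    Returns True when the redundant-condition CDF exceeds the bound,
    which is inconsistent with a parallel race and is taken as
    evidence for coactivation.
    """
    bound = min(1.0, ecdf(rt_vis, t) + ecdf(rt_aud, t))
    return bool(ecdf(rt_red, t) > bound)

# Fabricated RTs in which redundant trials are faster than either
# single modality can explain at t = 260 ms.
violated = race_model_violation(
    rt_vis=[300, 320, 340, 360], rt_aud=[310, 330, 350, 370],
    rt_red=[250, 260, 270, 280], t=260)
```

    In practice the inequality is evaluated at several quantiles of the RT distributions rather than a single time point.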

  6. Designing visual displays and system models for safe reactor operations based on the user's perspective of the system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown-VanHoozer, S.A.

    Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, by ignoring the importance of the integration of the user interface at the information process level, the result can be sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that the designers understand how each user's processing characteristics affects how the user gathers information, and how the user communicates the information to the designer and other users. A different type of approach in achieving this understanding is Neuro Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the methodology known as NLP, and its use in expanding design choices from the user's "model of the world," in the areas of virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, pattern recognition, and much, much more.

  7. Neural Pathways Conveying Novisual Information to the Visual Cortex

    PubMed Central

    2013-01-01

    The visual cortex has been traditionally considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in individuals with congenital visual deprivation, indicating the supramodal nature of the functional representation in the visual cortex. To understand the neural substrates of the cross-modal processing of non-visual signals in the visual cortex, we first show the supramodal nature of the visual cortex. We then review how non-visual signals reach the visual cortex. Moreover, we discuss whether these non-visual pathways are reshaped by early visual deprivation. Finally, the open question of the nature (stimulus-driven or top-down) of non-visual signals is also discussed. PMID:23840972

  8. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing it with natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases image resolution perception, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. 
The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work will demonstrate some basic information processing for optimal information capture for head-mounted systems.
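    The mapping described above (a 2-D brightness map transformed into an audio signal as a function of frequency and time) can be sketched as a column-by-column scan in which each image row drives a sine oscillator. The scan scheme and parameter values here are illustrative assumptions, not the specification of the actual device.

```python
import numpy as np

def image_to_sound(image, duration=1.0, fs=8000, f_lo=200.0, f_hi=4000.0):
    """Map a 2-D brightness image (rows x cols, values in [0, 1]) to a
    mono audio signal: columns are scanned left to right in time, each
    row drives a sine oscillator (top row = highest pitch), and pixel
    brightness sets that oscillator's amplitude.
    """
    rows, cols = image.shape
    freqs = np.linspace(f_hi, f_lo, rows)        # row index -> frequency
    samples_per_col = int(duration * fs / cols)
    t = np.arange(samples_per_col) / fs
    out = []
    for c in range(cols):
        weights = image[:, c][:, None]           # brightness per row
        tones = np.sin(2 * np.pi * freqs[:, None] * t)
        out.append((weights * tones).sum(axis=0))
    signal = np.concatenate(out)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal  # normalize to [-1, 1]

# A bright diagonal maps to a pitch sweep across the one-second scan.
audio = image_to_sound(np.eye(8))
```

    A direct mapping of this kind is redundant by design, in line with the argument above that simplicity and redundancy protect relevant clues from being filtered out before they reach the ear.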

  9. Two Dream Machines: Television and the Human Brain.

    ERIC Educational Resources Information Center

    Deming, Caren J.

    Research into brain physiology and dream psychology have helped to illuminate the biological purposes and processes of dreaming. Physical and functional characteristics shared by dreaming and television include the perception of visual and auditory images, operation in a binary mode, and the encoding of visual information. Research is needed in…

  10. Modality-independent coding of spatial layout in the human brain

    PubMed Central

    Wolbers, Thomas; Klatzky, Roberta L.; Loomis, Jack M.; Wutte, Magdalena G.; Giudice, Nicholas A.

    2011-01-01

    Summary In many non-human species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, as most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two fMRI experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3-D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain. PMID:21620708

  11. The effect of human engagement depicted in contextual photographs on the visual attention patterns of adults with traumatic brain injury.

    PubMed

    Thiessen, Amber; Brown, Jessica; Beukelman, David; Hux, Karen

    2017-09-01

    Photographs are a frequently employed tool for the rehabilitation of adults with traumatic brain injury (TBI). Speech-language pathologists (SLPs) working with these individuals must select photos that are easily identifiable and meaningful to their clients. In this investigation, we examined the visual attention response to camera- (i.e., depicted human figure looking toward camera) and task-engaged (i.e., depicted human figure looking at and touching an object) contextual photographs for a group of adults with TBI and a group of adults without neurological conditions. Eye-tracking technology served to accurately and objectively measure visual fixations. Although differences were hypothesized given the cognitive deficits associated with TBI, study results revealed little difference in the visual fixation patterns of adults with and without TBI. Specifically, both groups of participants tended to fixate rapidly on the depicted human figure and fixate more on objects in which a human figure was task-engaged than when a human figure was camera-engaged. These results indicate that strategic placement of human figures in a contextual photograph may modify the way in which individuals with TBI visually attend to and interpret photographs. In addition, task-engagement appears to have a guiding effect on visual attention that may be of benefit to SLPs hoping to select more effective contextual photographs for their clients with TBI. Finally, the limited differences in visual attention patterns between individuals with TBI and their age- and gender-matched peers without neurological impairments indicate that these two groups find similar photograph regions to be worthy of visual fixation. Readers will gain knowledge regarding the photograph selection process for individuals with TBI. In addition, readers will be able to identify camera- and task-engaged photographs and to explain why task-engagement may be a beneficial component of contextual photographs. 
Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Perceptual deficits of object identification: apperceptive agnosia.

    PubMed

    Milner, A David; Cavina-Pratesi, Cristiana

    2018-01-01

    It is argued here that apperceptive object agnosia (generally now known as visual form agnosia) is in reality not a kind of agnosia, but rather a form of "imperception" (to use the term coined by Hughlings Jackson). We further argue that its proximate cause is a bilateral loss (or functional loss) of the visual form processing systems embodied in the human lateral occipital cortex (area LO). According to the dual-system model of cortical visual processing elaborated by Milner and Goodale (2006), area LO constitutes a crucial component of the ventral stream, and indeed is essential for providing the figural qualities inherent in our normal visual perception of the world. According to this account, the functional loss of area LO would leave only spared visual areas within the occipito-parietal dorsal stream - dedicated to the control of visually-guided actions - potentially able to provide some aspects of visual shape processing in patients with apperceptive agnosia. We review the relevant evidence from such individuals, concentrating particularly on the well-researched patient D.F. We conclude that studies of this kind can provide useful pointers to an understanding of the processing characteristics of parietal-lobe visual mechanisms and their interactions with occipitotemporal perceptual systems in the guidance of action. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. The Spatial and the Visual in Mental Spatial Reasoning: An Ill-Posed Distinction

    NASA Astrophysics Data System (ADS)

    Schultheis, Holger; Bertel, Sven; Barkowsky, Thomas; Seifert, Inessa

    It is an ongoing and controversial debate in cognitive science which aspects of knowledge humans process visually and which ones they process spatially. Similarly, artificial intelligence (AI) and cognitive science research, in building computational cognitive systems, tended to use strictly spatial or strictly visual representations. The resulting systems, however, were suboptimal both with respect to computational efficiency and cognitive plausibility. In this paper, we propose that the problems in both research strands stem from a misconception of the visual and the spatial in mental spatial knowledge processing. Instead of viewing the visual and the spatial as two clearly separable categories, they should be conceptualized as the extremes of a continuous dimension of representation. Regarding psychology, a continuous dimension avoids the need to exclusively assign processes and representations to either one of the categories and, thus, facilitates a more unambiguous rating of processes and representations. Regarding AI and cognitive science, the concept of a continuous spatial/visual dimension provides the possibility of representation structures which can vary continuously along the spatial/visual dimension. As a first step in exploiting these potential advantages of the proposed conception we (a) introduce criteria allowing for a non-dichotomic judgment of processes and representations and (b) present an approach towards representation structures that can flexibly vary along the spatial/visual dimension.

  14. The "social" and "interpersonal" body in spatial cognition. The role of agency and interagency.

    PubMed

    Crivelli, Davide; Balconi, Michela

    2015-09-01

    In order to interact effectively, we need to represent our actions as produced by human beings. According to direct access theories, the first steps of visual information processing offer us an informed direct grasp of the situation, especially when social and interpersonal components are implicated. Biological system detection may be the gateway of such smart processes and may then influence initial stages of perception, fostering adaptive social behaviour. To investigate early neural correlates of human agency detection in ecological situations with high or low social impact, we compared scenes showing a human versus an artificial agent interacting with a human agent. Twenty volunteers participated in the study. They were asked to observe dynamic visual stimuli showing realistic interactions. ERPs (event-related potentials) were recorded. Each stimulus depicted an arm executing a gesture addressed to a human agent. Visual features of the arm were manipulated: in half of the trials, it was real; in the other trials, it was deprived of some details and transformed into a statue-like arm. EEG morphological analysis revealed an early negative deflection peaking at about 155 ms. Peak amplitude data were statistically analysed by repeated-measures ANOVAs. The peak was found to be larger in the left inferior posterior region when the gesturing arm was human. The early negative deflection, N150, which we found to differ between the human and artificial conditions, is presumably associated with human agency detection in high interpersonal contexts.

  15. Looking away from faces: influence of high-level visual processes on saccade programming.

    PubMed

    Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika

    2010-03-30

    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

  16. Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.

    PubMed

    Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas

    2016-01-01

    While perceptual learning increases objective sensitivity, its effects on the constant interaction between the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity and confidence in it increased with training. Metacognitive sensitivity, estimated from certainty ratings by a bias-free signal detection theoretic approach, in contrast did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself, determined the subjects' visual perceptual learning. Improvements in objective performance and in metacognitive confidence were instead determined by perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
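    For readers unfamiliar with the signal detection theoretic quantities involved: type-1 sensitivity d' separates discrimination ability from response bias. The study's metacognitive analysis builds on the same z-transform, but this minimal sketch computes only ordinary type-1 d' from hypothetical response counts.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Type-1 sensitivity from a 2x2 outcome table:
    d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (0.5 added to each cell) keeps the
    z-transform finite when an observed rate is 0 or 1.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Hypothetical session: 40/50 hits, 10/50 false alarms.
sensitivity = d_prime(40, 10, 10, 40)
```

    Because d' depends only on the two z-transformed rates, a subject who simply becomes more liberal (more "yes" responses) raises both rates together and leaves d' unchanged, which is what "bias-free" means here.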

  17. Semantic integration of differently asynchronous audio-visual information in videos of real-world events in cognitive processing: an ERP study.

    PubMed

    Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang

    2011-07-01

    In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the ERP (event-related potential) method. Subjects were presented with videos of real-world events in which the auditory and visual information are temporally asynchronous. When the critical action preceded the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that semantic contextual integration indexed by the N400 also applies to the cognitive processing of multisensory information. In addition, the N400 effect is early in latency when contrasted with other visually induced N400 studies, showing that cross-modal information is facilitated in time relative to visual information in isolation. When the sound preceded the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. The P600 might represent a reanalysis process in which the mismatch between the critical action and the preceding sound was evaluated. This shows that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
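    The N400 and P600 effects reported here are differences between condition-averaged waveforms. The core computation can be sketched in a few lines; the array shapes, synthetic data, and latency window below are illustrative, not the study's pipeline.

```python
import numpy as np

def difference_wave(incongruent_epochs, congruent_epochs):
    """ERP effect as a difference wave: average the single-trial
    epochs (trials x timepoints) within each condition, then
    subtract. An N400-like effect appears as a negative deflection
    in the incongruent-minus-congruent wave around 400 ms."""
    return incongruent_epochs.mean(axis=0) - congruent_epochs.mean(axis=0)

def mean_amplitude(wave, times, t_start, t_end):
    """Mean amplitude within a latency window (times in ms), the
    usual summary statistic submitted to group-level tests."""
    mask = (times >= t_start) & (times <= t_end)
    return float(wave[mask].mean())

# Synthetic epochs: 30 trials, 1000 ms at 1 ms resolution, with an
# N400-like negative bump added to the incongruent condition.
times = np.arange(1000)
congruent = np.random.default_rng(0).standard_normal((30, 1000))
incongruent = congruent - np.exp(-((times - 400) / 60.0) ** 2)
effect = mean_amplitude(difference_wave(incongruent, congruent), times, 300, 500)
```

    Shifting the measurement window later (e.g., 500-800 ms) and looking for a positive rather than negative deflection is how a P600-style effect would be quantified in the same framework.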

  18. Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)

    1993-01-01

    The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

  19. Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.

    PubMed

    Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J

    2007-06-01

    The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.

  20. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. Understanding the symptoms of schizophrenia using visual scan paths.

    PubMed

    Phillips, M L; David, A S

    1994-11-01

    This paper highlights the role of the visual scan path as a physiological marker of information processing while investigating positive symptomatology in schizophrenia. The current literature is reviewed using computer search facilities (Medline). Schizophrenic patients either scan or stare extensively, the latter being related to negative symptoms. They particularly scan when viewing human faces. Scan paths in these patients are important when viewing meaningful stimuli such as human faces, because of the relationship between abnormal perception of stimuli and symptomatology.

  2. A rodent model for the study of invariant visual object recognition

    PubMed Central

    Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.

    2009-01-01

    The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704

  3. The cannabinoid system and visual processing: a review on experimental findings and clinical presumptions.

    PubMed

    Schwitzer, Thomas; Schwan, Raymund; Angioi-Duprez, Karine; Ingster-Moati, Isabelle; Lalanne, Laurence; Giersch, Anne; Laprevote, Vincent

    2015-01-01

    Cannabis is one of the most prevalent drugs used worldwide. Regular cannabis use is associated with impairments in highly integrative cognitive functions such as memory, attention and executive functions. To date, the cerebral mechanisms of these deficits are still poorly understood. Studying the processing of visual information may offer an innovative and relevant approach to evaluate the cerebral impact of exogenous cannabinoids on the human brain. Furthermore, this knowledge is required to understand the impact of cannabis intake in everyday life, and especially in car drivers. Here we review the role of the endocannabinoids in the functioning of the visual system and the potential involvement of cannabis use in visual dysfunctions. This review describes the presence of the endocannabinoids in the critical stages of visual information processing, and their role in the modulation of visual neurotransmission and visual synaptic plasticity, thereby enabling them to alter the transmission of the visual signal. We also review several induced visual changes, together with experimental dysfunctions reported in cannabis users. In the discussion, we consider these results in relation to the existing literature. We argue for more involvement of public health research in the study of visual function in cannabis users, especially because cannabis use is implicated in driving impairments. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.

  4. Steady-state visual evoked potentials as a research tool in social affective neuroscience

    PubMed Central

    Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas

    2017-01-01

    Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794
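
    The ssVEP technique discussed above rests on reading out the EEG amplitude at a known stimulation ("tagging") frequency. A minimal numpy sketch, with hypothetical sampling rate and flicker frequency (not the authors' method):

    ```python
    import numpy as np

    def ssvep_amplitude(eeg, fs, tag_freq):
        """Single-sided spectral amplitude at the stimulation (tagging) frequency."""
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        amps = np.abs(np.fft.rfft(eeg)) / len(eeg)
        idx = np.argmin(np.abs(freqs - tag_freq))   # nearest frequency bin
        return amps[idx]

    fs, tag = 250, 15.0            # 15 Hz flicker, a plausible ssVEP driving rate
    t = np.arange(0, 4, 1 / fs)    # 4 s epoch -> 0.25 Hz frequency resolution
    rng = np.random.default_rng(1)
    eeg = 2.0 * np.sin(2 * np.pi * tag * t) + rng.standard_normal(t.size)

    amp = ssvep_amplitude(eeg, fs, tag)   # ~1.0 (half the sine's amplitude)
    ```

    Because the driven response sits in a single narrow bin, this readout is robust to broadband noise, which is one reason the ssVEP lends itself to trial-by-trial analysis.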

  5. Explaining neural signals in human visual cortex with an associative learning model.

    PubMed

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
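
    The network model in the study above (Schmajuk, Gray, & Lam, 1996) is considerably richer, but the core idea of separate prediction and surprise (prediction-error) signals can be sketched with a minimal Rescorla-Wagner-style delta rule; all numbers are illustrative.

    ```python
    # Minimal Rescorla-Wagner sketch: one cue (e.g., a face-predictive context)
    # learns to predict an outcome; "surprise" is the prediction error.
    def rescorla_wagner(outcomes, alpha=0.3):
        """Return per-trial (prediction, surprise) pairs for a single cue."""
        v = 0.0                      # associative strength: the prediction
        trace = []
        for outcome in outcomes:     # outcome: 1.0 = stimulus present, 0.0 = absent
            surprise = outcome - v   # prediction error
            trace.append((v, surprise))
            v += alpha * surprise    # delta-rule update
        return trace

    # For a consistently expected stimulus, the prediction signal grows
    # and the surprise signal shrinks across trials.
    trace = rescorla_wagner([1.0] * 20)
    ```

    Summing two such signals with separate weights is, in spirit, what the fMRI analysis above did when fitting FFA responses as expectation plus surprise.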

  6. The visual white matter: The application of diffusion MRI and fiber tractography to vision science

    PubMed Central

    Rokem, Ariel; Takemura, Hiromasa; Bock, Andrew S.; Scherf, K. Suzanne; Behrmann, Marlene; Wandell, Brian A.; Fine, Ione; Bridge, Holly; Pestilli, Franco

    2017-01-01

    Visual neuroscience has traditionally focused much of its attention on understanding the response properties of single neurons or neuronal ensembles. The visual white matter and the long-range neuronal connections it supports are fundamental in establishing such neuronal response properties and visual function. This review article provides an introduction to measurements and methods to study the human visual white matter using diffusion MRI. These methods allow us to measure the microstructural and macrostructural properties of the white matter in living human individuals; they allow us to trace long-range connections between neurons in different parts of the visual system and to measure the biophysical properties of these connections. We also review a range of findings from recent studies on connections between different visual field maps, the effects of visual impairment on the white matter, and the properties underlying networks that process visual information supporting visual face recognition. Finally, we discuss a few promising directions for future studies. These include new methods for analysis of MRI data, open datasets that are becoming available to study brain connectivity and white matter properties, and open source software for the analysis of these data. PMID:28196374
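
    One microstructural property routinely derived from diffusion MRI in such studies is fractional anisotropy (FA), computed from the three eigenvalues of the diffusion tensor. A small numpy sketch of the standard formula; the eigenvalues below are illustrative, not measured values.

    ```python
    import numpy as np

    def fractional_anisotropy(evals):
        """FA from the three diffusion-tensor eigenvalues."""
        l = np.asarray(evals, dtype=float)
        md = l.mean()                              # mean diffusivity
        num = np.sqrt(((l - md) ** 2).sum())
        den = np.sqrt((l ** 2).sum())
        return np.sqrt(1.5) * num / den

    iso = fractional_anisotropy([1.0, 1.0, 1.0])    # isotropic voxel -> FA = 0
    fiber = fractional_anisotropy([1.8, 0.2, 0.2])  # elongated tensor -> high FA
    ```

    FA near 0 indicates unrestricted diffusion (e.g., CSF); FA approaching 1 indicates strongly directional diffusion of the kind seen in coherent white-matter bundles such as the optic radiation.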

  7. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras

    PubMed Central

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-01-01

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications, because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points at an object? In this paper, a visual positioning solution was developed based on a single image captured by a smartphone camera pointing at a well-defined object. The smartphone camera mimics the process by which human eyes locate the observer relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings, including a meeting room, a library, and a reading room. Experimental results show that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution with 300 samples from 10 different people is 73.1 cm. PMID:29144420
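
    The geometry underlying single-image positioning can be hinted at with the pinhole-camera relation Z = f * H / h. This toy sketch is not the paper's actual solution (which recovers a full relative pose), and all numbers are hypothetical.

    ```python
    # Pinhole-camera sketch: distance to an object of known physical height
    # from its apparent height in the image. Focal length is in pixels.
    def distance_to_object(focal_px, object_height_m, image_height_px):
        """Z = f * H / h for an ideal pinhole camera."""
        return focal_px * object_height_m / image_height_px

    # A 2.0 m door imaged 800 px tall by a camera with a 1600 px focal
    # length is 4 m away under this model.
    z = distance_to_object(focal_px=1600.0, object_height_m=2.0, image_height_px=800.0)
    ```

    Real solutions must additionally handle camera calibration, tilt, and the object's bearing, which is where the centimeter-level accuracy reported above comes from.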

  9. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face.

    PubMed

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  10. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  11. An Experimental Analysis of Memory Processing

    PubMed Central

    Wright, Anthony A

    2007-01-01

    Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes—tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed. PMID:18047230

  12. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase the temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study includes two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamics of visual content can increase the temporal uncertainty of cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim of keeping the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamics of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step toward making brain-computer interfacing technology available for human-computer interaction applications.

  13. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
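
    The pattern-classification analyses mentioned above probe whether delay-period voxel patterns carry stimulus information. Below is a toy nearest-centroid (correlation-based) classifier on synthetic "voxel patterns"; this is not the authors' classifier, and the data, labels, and dimensions are all simulated.

    ```python
    import numpy as np

    def nearest_centroid(train_patterns, train_labels, test_pattern):
        """Classify a voxel pattern by its correlation with each class centroid."""
        best, best_r = None, -np.inf
        for lab in sorted(set(train_labels)):
            centroid = np.mean(
                [p for p, l in zip(train_patterns, train_labels) if l == lab], axis=0)
            r = np.corrcoef(centroid, test_pattern)[0, 1]
            if r > best_r:
                best, best_r = lab, r
        return best

    rng = np.random.default_rng(2)
    # Hypothetical 50-voxel response prototypes for two orientations.
    proto = {"ori_45": rng.standard_normal(50), "ori_135": rng.standard_normal(50)}
    patterns, labels = [], []
    for lab, p in proto.items():
        for _ in range(10):                       # 10 noisy "trials" per orientation
            patterns.append(p + 0.3 * rng.standard_normal(50))
            labels.append(lab)

    probe = proto["ori_45"] + 0.3 * rng.standard_normal(50)
    pred = nearest_centroid(patterns, labels, probe)
    ```

    If delay-period patterns were noise only, such a decoder would perform at chance; above-chance decoding is the evidence for active maintenance that the study reports.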

  14. Act quickly, decide later: long-latency visual processing underlies perceptual decisions but not reflexive behavior.

    PubMed

    Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F

    2011-12-01

    Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.

  15. Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces

    PubMed Central

    Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.

    2015-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122

  16. Attention to Color Sharpens Neural Population Tuning via Feedback Processing in the Human Visual Cortex Hierarchy.

    PubMed

    Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max

    2017-10-25

    Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. 
Our observations support computational models proposing that feature selectivity increases over time by attenuating the responses of less-selective cells in lower-level brain areas. These data emphasize that color perception involves multiple areas across a hierarchy of regions, interacting with each other in a complex, recursive manner. Copyright © 2017 the authors 0270-6474/17/3710346-12$15.00/0.

  17. Residual perception of biological motion in cortical blindness.

    PubMed

    Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto

    2016-12-01

    From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion provided top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Finally, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision. 
Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Artificial retina model for the retinally blind based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Song, Xin-qiang; Jiang, Fa-gang; Chang, Da-ding

    2007-01-01

    An artificial retina aims to stimulate the remaining retinal neurons in patients with degenerated photoreceptors. Microelectrode arrays have been developed for this purpose as part of the stimulator. Designing such microelectrode arrays first requires a suitable mathematical model of human retinal information processing. In this paper, a flexible and adjustable model for extracting human visual information is presented, based on the wavelet transform. Because the wavelet transform is well suited to image processing and consistent with how the human visual system extracts information, wavelet transform theory is applied to the artificial retina model for the retinally blind. The response of the model to a synthetic image is shown. The simulated experiment demonstrates that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an artificial retina.
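
    The wavelet decomposition at the heart of such a model can be illustrated with one level of the 2D Haar transform, which splits an image into a coarse approximation plus horizontal, vertical, and diagonal detail subbands; the paper's actual choice of wavelet may differ. A minimal numpy sketch:

    ```python
    import numpy as np

    def haar2d_level(img):
        """One level of a 2D Haar transform: approximation + 3 detail subbands."""
        a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
        d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
        LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse appearance
        LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
        HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
        HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
        return LL, LH, HL, HH

    img = np.arange(16.0).reshape(4, 4)           # toy 4x4 "image"
    LL, LH, HL, HH = haar2d_level(img)

    # The transform is invertible, so no visual information is lost:
    rec = np.empty_like(img)
    rec[0::2, 0::2] = LL + LH + HL + HH
    rec[0::2, 1::2] = LL - LH + HL - HH
    rec[1::2, 0::2] = LL + LH - HL - HH
    rec[1::2, 1::2] = LL - LH - HL + HH
    ```

    The subband structure is what makes the representation "flexible and adjustable": coarse appearance and oriented detail can be weighted or discarded independently before mapping onto stimulation channels.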

  19. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  20. A computational feedforward model predicts categorization of masked emotional body language for longer, but not for shorter, latencies.

    PubMed

    Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice

    2012-07-01

    Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.

  1. Attention mechanisms in visual search -- an fMRI study.

    PubMed

    Leonards, U; Sunaert, S; Van Hecke, P; Orban, G A

    2000-01-01

    The human visual system is usually confronted with many different objects at a time, with only some of them reaching consciousness. Reaction-time studies have revealed two different strategies by which objects are selected for further processing: an automatic, efficient search process, and a conscious, so-called inefficient search [Treisman, A. (1991). Search, similarity, and integration of features between and within dimensions. Journal of Experimental Psychology: Human Perception and Performance, 17, 652--676; Treisman, A., & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97--136; Wolfe, J. M. (1996). Visual search. In H. Pashler (Ed.), Attention. London: University College London Press]. Two different theories have been proposed to account for these search processes. Parallel theories presume that both types of search are treated by a single mechanism that is modulated by attentional and computational demands. Serial theories, in contrast, propose that parallel processing may underlie efficient search, but inefficient searching requires an additional serial mechanism, an attentional "spotlight" (Treisman, A., 1991) that successively shifts attention to different locations in the visual field. Using functional magnetic resonance imaging (fMRI), we show that the cerebral networks involved in efficient and inefficient search overlap almost completely. Only the superior frontal region, known to be involved in working memory [Courtney, S. M., Petit, L., Maisog, J. M., Ungerleider, L. G., & Haxby, J. V. (1998). An area specialized for spatial working memory in human frontal cortex. Science, 279, 1347--1351], and distinct from the frontal eye fields, which control spatial shifts of attention, was specifically involved in inefficient search. Activity modulations correlated with subjects' behavior best in the extrastriate cortical areas, where the amount of activity depended on the number of distracting elements in the display. Such a correlation was not observed in the parietal and frontal regions, usually assumed to be involved in spatial attention processing. These results can be interpreted in two ways: the most likely is that visual search does not require serial processing; otherwise, we must assume the existence of a serial searchlight that operates in the extrastriate cortex but differs from the visuospatial shifts of attention involving the parietal and frontal regions.

  2. The effect of phasic auditory alerting on visual perception.

    PubMed

    Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas

    2017-08-01

    Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information.
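
    The two TVA quantities estimated in the study can be illustrated with a toy model. In TVA, accuracy for a single briefly exposed item is commonly written as P(t) = 1 − exp(−v·(t − t0)) for exposures t above the perceptual threshold t0, where v is the processing speed. The parameter values below are illustrative, not the values fitted in the study:

```python
import math

def p_report(t_ms, v=40.0, t0_ms=15.0):
    """TVA-style accuracy for a single letter: an exponential race
    with processing speed v (items/s) starting at threshold t0."""
    t_eff = max(0.0, (t_ms - t0_ms) / 1000.0)  # effective exposure (s)
    return 1.0 - math.exp(-v * t_eff)

# Phasic alerting in this framework: raising v lifts accuracy at any
# fixed exposure; lowering t0 shifts the whole curve leftward.
for t in (20, 50, 100):
    print(t, round(p_report(t), 3), round(p_report(t, v=55.0), 3))
```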

  3. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function.

  4. Visual adaptation and face perception

    PubMed Central

    Webster, Michael A.; MacLeod, Donald I. A.

    2011-01-01

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555

  5. Visual adaptation and face perception.

    PubMed

    Webster, Michael A; MacLeod, Donald I A

    2011-06-12

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces.

  6. Interactions between motion and form processing in the human visual system.

    PubMed

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  7. Interactions between motion and form processing in the human visual system

    PubMed Central

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by “motion-streaks” influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS. PMID:23730286

  8. Human Subject Research Protocol: Computer-Aided Human Centric Cyber Situation Awareness: Understanding Cognitive Processes of Cyber Analysts

    DTIC Science & Technology

    2013-11-01

    …by existing cyber-attack detection tools far exceeds the analysts’ cognitive capabilities. Grounded in perceptual and cognitive theory, many visual… Processes: Inspired by the sense-making theory discussed earlier, we model the analytical reasoning process of cyber analysts using three key… analyst are called “working hypotheses”); each hypothesis could trigger further actions to confirm or disconfirm it. New actions will lead to new…

  9. Cognitive search model and a new query paradigm

    NASA Astrophysics Data System (ADS)

    Xu, Zhonghui

    2001-06-01

    This paper proposes a cognitive model in which people begin to search pictures by using semantic content and find the right picture by judging whether its visual content is a proper visualization of the desired semantics. Essentially, human search is not just a process of matching computations on visual features but rather a process of visualizing known semantic content. For people to search electronic images the way they do manually in the model, we suggest that querying be a semantic-driven process like design. A query-by-design paradigm is proposed in the sense that what you design is what you find. Unlike query-by-example, query-by-design allows users to specify the semantic content through an iterative and incremental interaction process, so that a retrieval can start with association and identification of the given semantic content and be refined as further visual cues become available. An experimental image retrieval system, Kuafu, has been under development using the query-by-design paradigm and an iconic language.

  10. A number-form area in the blind

    PubMed Central

    Abboud, Sami; Maidenbaum, Shachar; Dehaene, Stanislas; Amedi, Amir

    2015-01-01

    Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what is the contribution of shape biases to its formation and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns. PMID:25613599

  11. An amodal shared resource model of language-mediated visual attention

    PubMed Central

    Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk

    2013-01-01

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967

  12. Cognitive and psychological science insights to improve climate change data visualization

    NASA Astrophysics Data System (ADS)

    Harold, Jordan; Lorenzoni, Irene; Shipley, Thomas F.; Coventry, Kenny R.

    2016-12-01

    Visualization of climate data plays an integral role in the communication of climate change findings to both expert and non-expert audiences. The cognitive and psychological sciences can provide valuable insights into how to improve visualization of climate data based on knowledge of how the human brain processes visual and linguistic information. We review four key research areas to demonstrate their potential to make data more accessible to diverse audiences: directing visual attention, visual complexity, making inferences from visuals, and the mapping between visuals and language. We present evidence-informed guidelines to help climate scientists increase the accessibility of graphics to non-experts, and illustrate how the guidelines can work in practice in the context of Intergovernmental Panel on Climate Change graphics.

  13. fMRI evidence for areas that process surface gloss in the human visual cortex

    PubMed Central

    Sun, Hua-Chun; Ban, Hiroshi; Di Luca, Massimiliano; Welchman, Andrew E.

    2015-01-01

    Surface gloss is an important cue to the material properties of objects. Recent progress in the study of the macaque brain has increased our understanding of the areas involved in processing information about gloss; however, the homologies with the human brain are not yet fully understood. Here we used human functional magnetic resonance imaging (fMRI) measurements to localize brain areas preferentially responding to glossy objects. We measured cortical activity for thirty-two rendered three-dimensional objects that had either Lambertian or specular surface properties. To control for differences in image structure, we overlaid a grid on the images and scrambled its cells. We found activations related to gloss in the posterior fusiform sulcus (pFs) and in area V3B/KO. Subsequent analysis with Granger causality mapping indicated that V3B/KO processes gloss information differently than pFs. Our results identify a small network of mid-level visual areas whose activity may be important in supporting the perception of surface gloss. PMID:25490434

  14. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  15. Rhythmic Oscillations of Visual Contrast Sensitivity Synchronized with Action

    PubMed Central

    Tomassini, Alice; Spinelli, Donatella; Jacono, Marco; Sandini, Giulio; Morrone, Maria Concetta

    2016-01-01

    It is well known that the motor and the sensory systems structure sensory data collection and cooperate to achieve an efficient integration and exchange of information. Increasing evidence suggests that both motor and sensory functions are regulated by rhythmic processes reflecting alternating states of neuronal excitability, and these may be involved in mediating sensory-motor interactions. Here we show an oscillatory fluctuation in early visual processing time locked with the execution of voluntary action, and, crucially, even for visual stimuli irrelevant to the motor task. Human participants were asked to perform a reaching movement toward a display and judge the orientation of a Gabor patch, near contrast threshold, briefly presented at random times before and during the reaching movement. When the data are temporally aligned to the onset of movement, visual contrast sensitivity oscillates with periodicity within the theta band. Importantly, the oscillations emerge during the motor planning stage, ~500 ms before movement onset. We suggest that brain oscillatory dynamics may mediate an automatic coupling between early motor planning and early visual processing, possibly instrumental in linking and closing up the visual-motor control loop. PMID:25948254

  16. Inferring the direction of implied motion depends on visual awareness.

    PubMed

    Faivre, Nathan; Koch, Christof

    2014-04-04

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction.

  17. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, developed at the University of Tokyo. The multi-window vision system is unique in that it processes visual information only inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was used to produce the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later operating an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking its movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.
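
    The window-based tracking idea can be sketched as a loop that re-centres a small attention window on each frame. This is an illustrative reconstruction of the principle, not the actual algorithm of the Tokyo multi-window hardware; the blob detection here is a simple intensity-centroid step, and the window is assumed to stay inside the frame:

```python
import numpy as np

def track_in_window(frames, start, half=3):
    """Follow a bright blob by re-centring a small attention window
    on the intensity centroid found inside the window each frame."""
    cy, cx = start
    path = []
    for f in frames:
        win = f[cy - half:cy + half + 1, cx - half:cx + half + 1]
        ys, xs = np.nonzero(win > 0.5)
        if len(ys):  # shift the window centre toward the blob
            cy += int(round(ys.mean())) - half
            cx += int(round(xs.mean())) - half
        path.append((cy, cx))
    return path

# A 'ball' moving diagonally across a 20x20 screen, one pixel per frame.
frames = []
for i in range(6):
    f = np.zeros((20, 20))
    f[5 + i, 5 + i] = 1.0
    frames.append(f)

print(track_in_window(frames, start=(5, 5)))
```

    Because only the pixels inside the window are ever examined, the per-frame cost is constant regardless of screen size, which is the property that made the multi-window approach fast enough for real-time control.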

  18. Beyond visualization of big data: a multi-stage data exploration approach using visualization, sonification, and storification

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey; Ballora, Mark; Shumaker, Wade

    2013-05-01

    As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input streams from which they are derived. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that does not require extended listening, since visual "snapshots" are useful but auditory events exist only over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service-oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimal use of both visual and auditory processing pathways while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses the information infrastructure and data representation concerns their utilization raises in a distributed environment. We present a generalizable approach with a broad range of applications, including cyber security, medical informatics, facilitation of energy savings in "smart" buildings, and detection of natural and man-made disasters.

  19. Multiplicative processes in visual cognition

    NASA Astrophysics Data System (ADS)

    Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.

    2014-03-01

    The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve, elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
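
    The multiplicative version of the CLT described above is easy to demonstrate numerically: the logarithm of a product of i.i.d. positive factors is a sum of i.i.d. terms, so the product tends toward a log-normal distribution. A quick simulation, with an arbitrary choice of factor distribution:

```python
import math
import random

random.seed(1)

def product_sample(n_factors=100):
    """Product of many i.i.d. positive factors: by the CLT applied
    to the log-factors, approximately log-normally distributed."""
    p = 1.0
    for _ in range(n_factors):
        p *= random.uniform(0.5, 1.5)
    return p

# The log-values of the products should look Gaussian: roughly
# symmetric, i.e. sample skewness near zero.
logs = [math.log(product_sample()) for _ in range(5000)]
mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
skew = sum((x - mean) ** 3 for x in logs) / (len(logs) * var ** 1.5)
print(round(mean, 2), round(skew, 2))
```

    The same reasoning explains why log-normal fits are a natural first hypothesis for fixation-time distributions: any quantity produced by a cascade of independent multiplicative stages will show this signature.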

  20. Tracking the Spatiotemporal Neural Dynamics of Real-world Object Size and Animacy in the Human Brain.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2018-06-07

    Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. For this, we applied representational similarity to fMRI and MEG data to yield a view of brain activity with high spatial and temporal resolutions, respectively. Analysis of fMRI data revealed that a distributed and partly overlapping set of cortical regions extending from occipital to ventral and medial temporal cortex represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size stronger than most other regions. Further analysis of the detailed representational format revealed differences among regions involved in processing animacy. Analysis of MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG-fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis revealed that the neural dynamics of processing animacy and size were distinct from the neural dynamics of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.

  1. Design and Construction of a Portable Oculometer for Use in Transportation Oriented Human Factors Studies

    DOT National Transportation Integrated Search

    1971-08-01

    The report describes development of an instrument designed to acquire and process information about human visual performance. The instrument has the following features: it can be operated in a variety of transportation environments including simulato...

  2. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
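    The Laplacian pyramid mentioned above decomposes an image into band-pass detail levels plus a low-pass residual, and is exactly invertible. A simplified, dependency-free sketch (it uses plain decimation where a real implementation would low-pass filter before downsampling):

    ```python
    import numpy as np

    def downsample(img):
        # Plain 2x decimation keeps the sketch dependency-free;
        # a real implementation would Gaussian-blur first.
        return img[::2, ::2]

    def upsample(img, shape):
        # Nearest-neighbour expansion back to the original shape.
        up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
        return up[:shape[0], :shape[1]]

    def laplacian_pyramid(img, levels=3):
        """Each level stores the detail lost by one downsample step;
        the final level stores the low-pass residual image."""
        pyramid = []
        current = img.astype(float)
        for _ in range(levels):
            smaller = downsample(current)
            pyramid.append(current - upsample(smaller, current.shape))
            current = smaller
        pyramid.append(current)
        return pyramid

    def reconstruct(pyramid):
        img = pyramid[-1]
        for detail in reversed(pyramid[:-1]):
            img = upsample(img, detail.shape) + detail
        return img

    rng = np.random.default_rng(1)
    image = rng.random((64, 64))
    pyr = laplacian_pyramid(image)
    print(np.allclose(reconstruct(pyr), image))  # True: exact by construction
    ```

    The decomposition is lossless because each detail level records precisely what the downsample step discarded, which is what makes the pyramid attractive for coding.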

  3. Cognitive measure on different profiles.

    PubMed

    Spindola, Marilda; Carra, Giovani; Balbinot, Alexandre; Zaro, Milton A

    2010-01-01

    Drawing on neurology and cognitive science, many studies aim to understand the human mental model, that is, how human cognition works, especially in learning processes that involve complex content and spatial-logical reasoning. The event-related potential (ERP) is a basic, non-invasive method of electrophysiological investigation: changes in the rhythm of the brain's frequency bands indicate particular types of processing or neuronal behavior, so ERPs can be used to assess aspects of human cognitive processing. This paper uses the ERP technique to help understand the cognitive pathways of subjects from different areas of knowledge when they are exposed to an external visual stimulus. The experiment combined 2D and 3D visual stimuli in the same picture. Signals were captured with a ten-channel electroencephalogram (EEG) system developed for this project, interfaced through an analog-to-digital converter (ADC) board to a LabVIEW system (National Instruments). The research followed a design-of-experiments (DOE) methodology, and the signal processing (mathematical and statistical techniques) revealed relationships between cognitive pathways within and between groups.
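    The core of ERP analysis is averaging stimulus-locked EEG epochs so that background activity cancels while the evoked response survives. A minimal single-channel sketch on synthetic data (sampling rate, event times, and the "P300"-like template are all invented for illustration):

    ```python
    import numpy as np

    def erp_average(eeg, events, fs, pre=0.1, post=0.5):
        """Average baseline-corrected, stimulus-locked epochs to estimate
        the event-related potential."""
        n_pre, n_post = int(pre * fs), int(post * fs)
        epochs = np.array([eeg[s - n_pre : s + n_post] for s in events
                           if s - n_pre >= 0 and s + n_post <= len(eeg)])
        baseline = epochs[:, :n_pre].mean(axis=1, keepdims=True)
        return (epochs - baseline).mean(axis=0)

    fs = 250  # Hz (toy sampling rate)
    rng = np.random.default_rng(2)
    t = np.arange(int(0.6 * fs)) / fs - 0.1
    template = 5e-6 * np.exp(-((t - 0.3) ** 2) / 0.002)  # a "P300"-like bump
    events = np.arange(500, 14500, 400)
    eeg = 5e-6 * rng.normal(size=250 * 60)  # background EEG noise
    for s in events:  # embed the evoked response at each event
        eeg[s - int(0.1 * fs) : s + int(0.5 * fs)] += template
    erp = erp_average(eeg, events, fs)
    print(abs(erp).argmax() / fs - 0.1)  # peak latency near 0.3 s
    ```

    With N trials, uncorrelated background activity shrinks by a factor of sqrt(N) in the average, which is why dozens of repetitions recover a response buried in single-trial noise.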

  4. Modulation of human extrastriate visual processing by selective attention to colours and words.

    PubMed

    Nobre, A C; Allison, T; McCarthy, G

    1998-07-01

    The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.

  5. The influence of spontaneous activity on stimulus processing in primary visual cortex.

    PubMed

    Schölvinck, M L; Friston, K J; Rees, G

    2012-02-01

    Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing.
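    The "largely linear combination" claim can be tested by regressing the measured response on the stimulus, the spontaneous baseline, and their product: a near-zero interaction coefficient indicates additive combination. A toy sketch with invented data and coefficients:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    spont = rng.normal(size=n)         # pre-stimulus baseline activity
    stim = rng.integers(0, 2, size=n)  # stimulus absent (0) / present (1)
    # Toy responses built additively: no interaction term in the ground truth.
    response = 1.5 * stim + spont + 0.1 * rng.normal(size=n)

    # Ordinary least squares: response ~ 1 + stim + spont + stim*spont.
    X = np.column_stack([np.ones(n), stim, spont, stim * spont])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    print(beta)  # last entry (interaction) recovered near 0
    ```

    Were the combination sublinear or superlinear, the interaction coefficient would deviate systematically from zero; the abstract reports that for neuronal responses it effectively did not.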

  6. Influence of visual path information on human heading perception during rotation.

    PubMed

    Li, Li; Chen, Jing; Peng, Xiaozhe

    2009-03-31

    How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
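    For pure observer translation (no rotation), heading can be read off the velocity field as the focus of expansion (FOE): every flow vector points away from it, so the FOE solves a linear system. A noiseless sketch (pinhole camera with unit focal length; scene geometry and velocity are invented, with the depth range loosely matching the 6-50 m condition above):

    ```python
    import numpy as np

    def flow_field(points, T):
        """Image velocities for an observer translating with velocity T
        (pinhole camera, focal length 1; points are (X, Y, Z) in metres)."""
        X, Y, Z = points.T
        x, y = X / Z, Y / Z
        u = (-T[0] + x * T[2]) / Z
        v = (-T[1] + y * T[2]) / Z
        return np.column_stack([x, y]), np.column_stack([u, v])

    def estimate_heading(pos, vel):
        """Each flow vector (u, v) at (x, y) is collinear with the line from
        the FOE, giving one linear constraint v*fx - u*fy = v*x - u*y."""
        A = np.column_stack([vel[:, 1], -vel[:, 0]])
        b = vel[:, 1] * pos[:, 0] - vel[:, 0] * pos[:, 1]
        foe, *_ = np.linalg.lstsq(A, b, rcond=None)
        return foe

    rng = np.random.default_rng(4)
    pts = np.column_stack([rng.uniform(-5, 5, 200),
                           rng.uniform(-5, 5, 200),
                           rng.uniform(6, 50, 200)])  # depths 6-50 m
    T = np.array([0.2, 0.0, 1.0])  # FOE at image coordinate (0.2, 0)
    pos, vel = flow_field(pts, T)
    print(estimate_heading(pos, vel))  # recovers (Tx/Tz, Ty/Tz) = (0.2, 0.0)
    ```

    Adding observer rotation, as in the circular-path displays above, destroys this simple radial structure, which is exactly why heading estimation during rotation is the hard case the study investigates.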

  7. Perceptual learning in a non-human primate model of artificial vision

    PubMed Central

    Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.

    2016-01-01

    Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058

  8. Feature-based attention elicits surround suppression in feature space.

    PubMed

    Störmer, Viola S; Alvarez, George A

    2014-09-08

    It is known that focusing attention on a particular feature (e.g., the color red) facilitates the processing of all objects in the visual field containing that feature [1-7]. Here, we show that such feature-based attention not only facilitates processing but also actively inhibits processing of similar, but not identical, features globally across the visual field. We combined behavior and electrophysiological recordings of frequency-tagged potentials in human observers to measure this inhibitory surround in feature space. We found that sensory signals of an attended color (e.g., red) were enhanced, whereas sensory signals of colors similar to the target color (e.g., orange) were suppressed relative to colors more distinct from the target color (e.g., yellow). Importantly, this inhibitory effect spreads globally across the visual field, thus operating independently of location. These findings suggest that feature-based attention comprises an excitatory peak surrounded by a narrow inhibitory zone in color space to attenuate the most distracting and potentially confusable stimuli during visual perception. This selection profile is akin to what has been reported for location-based attention [8-10] and thus suggests that such center-surround mechanisms are an overarching principle of attention across different domains in the human brain.
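    Frequency tagging works because each stimulus flickers at its own rate, so the EEG response it drives can be read out as the spectral amplitude at that rate. A minimal sketch on synthetic data (tag frequencies, amplitudes, and noise level are all invented):

    ```python
    import numpy as np

    fs, dur = 500, 10.0                 # sample rate (Hz), trial length (s)
    t = np.arange(int(fs * dur)) / fs
    f_attended, f_ignored = 12.0, 15.0  # tagging frequencies of two stimuli
    rng = np.random.default_rng(5)
    # In this toy signal, attention boosts the attended stimulus' tag.
    eeg = (2.0 * np.sin(2 * np.pi * f_attended * t)
           + 1.0 * np.sin(2 * np.pi * f_ignored * t)
           + 1.5 * rng.normal(size=t.size))

    # Amplitude spectrum, normalized so a sinusoid's bin equals its amplitude.
    spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    def tag_amplitude(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    print(tag_amplitude(12.0) > tag_amplitude(15.0))  # attended tag is larger
    ```

    Because broadband noise spreads over thousands of frequency bins while each tag concentrates in one, the tagged responses stand out even at modest single-trial signal-to-noise ratios.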

  9. Feedforward and recurrent processing in scene segmentation: electroencephalography and functional magnetic resonance imaging.

    PubMed

    Scholte, H Steven; Jolij, Jacob; Fahrenfort, Johannes J; Lamme, Victor A F

    2008-11-01

    In texture segregation, an example of scene segmentation, we can discern two different processes: texture boundary detection and subsequent surface segregation [Lamme, V. A. F., Rodriguez-Rodriguez, V., & Spekreijse, H. Separate processing dynamics for texture elements, boundaries and surfaces in primary visual cortex of the macaque monkey. Cerebral Cortex, 9, 406-413, 1999]. Neural correlates of texture boundary detection have been found in monkey V1 [Sillito, A. M., Grieve, K. L., Jones, H. E., Cudeiro, J., & Davis, J. Visual cortical mechanisms detecting focal orientation discontinuities. Nature, 378, 492-496, 1995; Grosof, D. H., Shapley, R. M., & Hawken, M. J. Macaque-V1 neurons can signal illusory contours. Nature, 365, 550-552, 1993], but whether surface segregation occurs in monkey V1 [Rossi, A. F., Desimone, R., & Ungerleider, L. G. Contextual modulation in primary visual cortex of macaques. Journal of Neuroscience, 21, 1698-1709, 2001; Lamme, V. A. F. The neurophysiology of figure ground segregation in primary visual-cortex. Journal of Neuroscience, 15, 1605-1615, 1995], and whether boundary detection or surface segregation signals can also be measured in human V1, is more controversial [Kastner, S., De Weerd, P., & Ungerleider, L. G. Texture segregation in the human visual cortex: A functional MRI study. Journal of Neurophysiology, 83, 2453-2457, 2000]. Here we present electroencephalography (EEG) and functional magnetic resonance imaging data that have been recorded with a paradigm that makes it possible to differentiate between boundary detection and scene segmentation in humans. In this way, we were able to show with EEG that neural correlates of texture boundary detection are first present in the early visual cortex around 92 msec and then spread toward the parietal and temporal lobes. Correlates of surface segregation first appear in temporal areas (around 112 msec) and from there appear to spread to parietal, and back to occipital areas. After 208 msec, correlates of surface segregation and boundary detection also appear in more frontal areas. Blood oxygenation level-dependent magnetic resonance imaging results show correlates of boundary detection and surface segregation in all early visual areas including V1. We conclude that texture boundaries are detected in a feedforward fashion and are represented at increasing latencies in higher visual areas. Surface segregation, on the other hand, is represented in "reverse hierarchical" fashion and seems to arise from feedback signals toward early visual areas such as V1.

  10. Dissimilar processing of emotional facial expressions in human and monkey temporal cortex

    PubMed Central

    Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2013-01-01

    Emotional facial expressions play an important role in social communication across primates. Despite major progress made in our understanding of categorical information processing such as for objects and faces, little is known, however, about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2 × 2 × 2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole brain level, selective neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions with a face-selective right posterior STS area that also responded selectively to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that responded also to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions. PMID:23142071

  11. Integration and Visualization of Translational Medicine Data for Better Understanding of Human Diseases

    PubMed Central

    Satagopam, Venkata; Gu, Wei; Eifes, Serge; Gawron, Piotr; Ostaszewski, Marek; Gebel, Stephan; Barbosa-Silva, Adriano; Balling, Rudi; Schneider, Reinhard

    2016-01-01

    Translational medicine is a domain turning results of basic life science research into new tools and methods in a clinical environment, for example, as new diagnostics or therapies. Nowadays, the process of translation is supported by large amounts of heterogeneous data ranging from medical data to a whole range of -omics data. It is not only a great opportunity but also a great challenge, as translational medicine big data is difficult to integrate and analyze, and requires the involvement of biomedical experts for the data processing. We show here that visualization and interoperable workflows, combining multiple complex steps, can address at least parts of the challenge. In this article, we present an integrated workflow for exploration, analysis, and interpretation of translational medicine data in the context of human health. Three Web services—tranSMART, a Galaxy Server, and a MINERVA platform—are combined into one big data pipeline. Native visualization capabilities enable the biomedical experts to get a comprehensive overview and control over separate steps of the workflow. The capabilities of tranSMART enable a flexible filtering of multidimensional integrated data sets to create subsets suitable for downstream processing. A Galaxy Server offers visually aided construction of analytical pipelines, with the use of existing or custom components. A MINERVA platform supports the exploration of health and disease-related mechanisms in a contextualized analytical visualization system. We demonstrate the utility of our workflow by illustrating its subsequent steps using an existing data set, for which we propose a filtering scheme, an analytical pipeline, and a corresponding visualization of analytical results. The workflow is available as a sandbox environment, where readers can work with the described setup themselves. Overall, our work shows how visualization and interfacing of big data processing services facilitate exploration, analysis, and interpretation of translational medicine data. PMID:27441714

  12. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    PubMed

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation among different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust.

  13. Within-Hemifield Competition in Early Visual Areas Limits the Ability to Track Multiple Objects with Attention

    PubMed Central

    Alvarez, George A.; Cavanagh, Patrick

    2014-01-01

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651

  14. Functional correlates of the anterolateral processing hierarchy in human auditory cortex.

    PubMed

    Chevillet, Mark; Riesenhuber, Maximilian; Rauschecker, Josef P

    2011-06-22

    Converging evidence supports the hypothesis that an anterolateral processing pathway mediates sound identification in auditory cortex, analogous to the role of the ventral cortical pathway in visual object recognition. Studies in nonhuman primates have characterized the anterolateral auditory pathway as a processing hierarchy, composed of three anatomically and physiologically distinct initial stages: core, belt, and parabelt. In humans, potential homologs of these regions have been identified anatomically, but reliable and complete functional distinctions between them have yet to be established. Because the anatomical locations of these fields vary across subjects, investigations of potential homologs between monkeys and humans require these fields to be defined in single subjects. Using functional MRI, we presented three classes of sounds (tones, band-passed noise bursts, and conspecific vocalizations), equivalent to those used in previous monkey studies. In each individual subject, three regions showing functional similarities to macaque core, belt, and parabelt were readily identified. Furthermore, the relative sizes and locations of these regions were consistent with those reported in human anatomical studies. Our results demonstrate that the functional organization of the anterolateral processing pathway in humans is largely consistent with that of nonhuman primates. Because our scanning sessions last only 15 min/subject, they can be run in conjunction with other scans. This will enable future studies to characterize functional modules in human auditory cortex at a level of detail previously possible only in visual cortex. Furthermore, the approach of using identical schemes in both humans and monkeys will aid with establishing potential homologies between them.

  15. Objective evaluation of the visual acuity in human eyes

    NASA Astrophysics Data System (ADS)

    Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.

    2009-08-01

    Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance from the patient. Typically, we need to ensure a subtended angle of vision of 5 minutes, which implies an object 8.8 mm high located at 6 meters (normal or 20/20 visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eyes. The mentioned process of identification of characters is carried out by means of the eye-brain system, giving an evaluation of the subjective visual performance. In this work we consider the eye as an isolated image-forming system, and show that it is possible to isolate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's system, we can obtain, in advance, the image of the Snellen chart. From this information, we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous situations of human vision, like the so-called "cerebral myopia".
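    The 5-arcminute criterion above is a straightforward visual-angle computation: the physical height of the optotype is fixed by the angle it must subtend at the test distance. A short sketch (the function name is my own):

    ```python
    import math

    def letter_height(distance_m, angle_arcmin=5.0):
        """Physical height subtending a given visual angle at a given distance."""
        theta = math.radians(angle_arcmin / 60.0)
        return 2.0 * distance_m * math.tan(theta / 2.0)

    # The 20/20 optotype subtends 5 arcmin at 6 m.
    print(round(letter_height(6.0) * 1000, 1))  # → 8.7 (mm)
    ```

    The exact value is about 8.73 mm, commonly rounded to the 8.8 mm figure quoted in the abstract; for such small angles the small-angle approximation h ≈ d·θ gives the same result.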

  16. Steady-state visually evoked potential correlates of human body perception.

    PubMed

    Giabbiconi, Claire-Marie; Jurilj, Verena; Gruber, Thomas; Vocks, Silja

    2016-11-01

    In cognitive neuroscience, interest in the neuronal basis underlying the processing of human bodies is steadily increasing. Based on functional magnetic resonance imaging studies, it is assumed that the processing of pictures of human bodies is anchored in a network of specialized brain areas comprising the extrastriate and the fusiform body area (EBA, FBA). An alternative way to examine the dynamics within these networks is electroencephalography, more specifically so-called steady-state visually evoked potentials (SSVEPs). In SSVEP tasks, a visual stimulus is presented repetitively at a predefined flickering rate and typically elicits a continuous oscillatory brain response at this frequency. This brain response is characterized by an excellent signal-to-noise ratio, a major advantage for source reconstructions. The main goal of the present study was to demonstrate the feasibility of this method for studying human body perception. To that end, we presented pictures of bodies and contrasted the resulting SSVEPs with two control conditions, i.e., non-objects and pictures of everyday objects (chairs). We found specific SSVEP amplitude differences between bodies and both control conditions. Source reconstructions localized the SSVEP generators to a network of temporal, occipital and parietal areas. Interestingly, only body perception resulted in activity differences in middle temporal and lateral occipitotemporal areas, most likely reflecting the EBA/FBA.

  17. Supramodal parametric working memory processing in humans.

    PubMed

    Spitzer, Bernhard; Blankenburg, Felix

    2012-03-07

    Previous studies of delayed-match-to-sample (DMTS) frequency discrimination in animals and humans have succeeded in delineating the neural signature of frequency processing in somatosensory working memory (WM). During retention of vibrotactile frequencies, stimulus-dependent single-cell and population activity in prefrontal cortex was found to reflect the task-relevant memory content, whereas increases in occipital alpha activity signaled the disengagement of areas not relevant for the tactile task. Here, we recorded EEG from human participants to determine the extent to which these mechanisms can be generalized to frequency retention in the visual and auditory domains. Subjects performed analogous variants of a DMTS frequency discrimination task, with the frequency information presented either visually, auditorily, or by vibrotactile stimulation. Examining oscillatory EEG activity during frequency retention, we found characteristic topographical distributions of alpha power over visual, auditory, and somatosensory cortices, indicating systematic patterns of inhibition and engagement of early sensory areas, depending on stimulus modality. The task-relevant frequency information, in contrast, was found to be represented in right prefrontal cortex, independent of presentation mode. In each of the three modality conditions, parametric modulations of prefrontal upper beta activity (20-30 Hz) emerged, in a very similar manner as recently found in vibrotactile tasks. Together, the findings corroborate a view of parametric WM as supramodal internal scaling of abstract quantity information and suggest strong relevance of previous evidence from vibrotactile work for a more general framework of quantity processing in human working memory.
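    The band-limited effects described above (occipital alpha, prefrontal 20-30 Hz beta) are typically quantified as mean spectral power within a frequency band. A minimal periodogram-based sketch on synthetic data (frequencies, amplitudes, and noise are invented; real analyses would use a tapered estimator such as Welch's method):

    ```python
    import numpy as np

    def band_power(signal, fs, f_lo, f_hi):
        """Mean power spectral density within [f_lo, f_hi] (plain periodogram)."""
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return psd[band].mean()

    fs = 250
    t = np.arange(5 * fs) / fs
    rng = np.random.default_rng(6)
    # Toy retention-interval EEG: strong 10 Hz alpha plus weaker 25 Hz beta.
    eeg = (3.0 * np.sin(2 * np.pi * 10 * t)
           + 1.0 * np.sin(2 * np.pi * 25 * t)
           + 0.5 * rng.normal(size=t.size))

    alpha = band_power(eeg, fs, 8, 12)       # alpha band
    beta = band_power(eeg, fs, 20, 30)       # upper beta band
    print(alpha > beta)  # alpha dominates in this toy signal
    ```

    In the study, it is the trial-by-trial modulation of such band power with the remembered frequency, rather than its absolute level, that carries the working-memory content.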

  18. Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.

    PubMed

    Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D

    2013-10-01

    Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range.

  19. Semantics of the visual environment encoded in parahippocampal cortex

    PubMed Central

    Bonner, Michael F.; Price, Amy Rose; Peelle, Jonathan E.; Grossman, Murray

    2016-01-01

    Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain. PMID:26679216

  20. Semantics of the Visual Environment Encoded in Parahippocampal Cortex.

    PubMed

    Bonner, Michael F; Price, Amy Rose; Peelle, Jonathan E; Grossman, Murray

    2016-03-01

    Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together, this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain.

  1. Complex for monitoring visual acuity and its application for evaluation of human psycho-physiological state

    NASA Astrophysics Data System (ADS)

    Sorokoumov, P. S.; Khabibullin, T. R.; Tolstaya, A. M.

    2017-01-01

    Existing psychological theories associate the movements of the human eye with reactions to external change: what we see, hear and feel. By analyzing gaze, we can compare the external human response (which shows the behavior of a person) with the natural reaction (what the person actually feels). This article describes a complex for detecting visual activity and its application to evaluating the psycho-physiological state of a person. Glasses fitted with a camera capture all movements of the human eye in real time. The data recorded by the camera are transmitted to a computer for processing, implemented with software developed by the authors. The result is presented in an informative and understandable report, which can be used for further analysis. The complex shows high efficiency and stable operation and can be used both for pedagogical personnel recruitment and for testing students during the educational process.

  2. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    PubMed

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the brain must implement knowledge-mediated disambiguation so that processing can proceed as fast as possible given the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation begins to proceed along visual areas after the onset of a two-tone ambiguous image, using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus, when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggest that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provide insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. Differential modulation of visual object processing in dorsal and ventral stream by stimulus visibility.

    PubMed

    Ludwig, Karin; Sterzer, Philipp; Kathmann, Norbert; Hesselmann, Guido

    2016-10-01

    As a functional organization principle in cortical visual information processing, the influential 'two visual systems' hypothesis proposes a division of labor between a dorsal "vision-for-action" and a ventral "vision-for-perception" stream. A core assumption of this model is that the two visual streams are differentially involved in visual awareness: ventral stream processing is closely linked to awareness while dorsal stream processing is not. In this functional magnetic resonance imaging (fMRI) study with human observers, we directly probed the stimulus-related information encoded in fMRI response patterns in both visual streams as a function of stimulus visibility. We parametrically modulated the visibility of face and tool stimuli by varying the contrasts of the masks in a continuous flash suppression (CFS) paradigm. We found that visibility - operationalized by objective and subjective measures - decreased proportionally with increasing log CFS mask contrast. Neuronally, this relationship was closely matched by ventral visual areas, showing a linear decrease of stimulus-related information with increasing mask contrast. Stimulus-related information in dorsal areas also showed a dependency on mask contrast, but the decrease rather followed a step function instead of a linear function. Together, our results suggest that both the ventral and the dorsal visual stream are linked to visual awareness, but neural activity in ventral areas more closely reflects graded differences in awareness compared to dorsal areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
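
The sensor-array-plus-ideal-observer architecture described in this record lends itself to a compact sketch. The toy implementation below is an assumption-laden stand-in, not Watson's published model: random linear filters replace the oriented, octave-bandwidth, eccentricity-scaled sensors, and the ideal Bayesian classifier reduces to nearest-template matching under isotropic Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "feature sensors" acting on a flattened image patch
# (an illustrative stand-in for the model's sensor array).
n_sensors, n_pixels = 64, 256
sensors = rng.standard_normal((n_sensors, n_pixels))

def sensor_responses(image, noise_sd=0.5):
    """Linear sensor outputs corrupted by additive Gaussian noise."""
    return sensors @ image + noise_sd * rng.standard_normal(n_sensors)

def ideal_classifier(responses, templates):
    """Ideal observer: pick the template whose noise-free sensor responses
    have the highest likelihood, which under isotropic Gaussian noise is
    the template at the smallest Euclidean distance."""
    dists = [np.linalg.norm(responses - sensors @ t) for t in templates]
    return int(np.argmin(dists))

# Two candidate stimuli and a noisy observation of the first.
stim_a, stim_b = rng.standard_normal(n_pixels), rng.standard_normal(n_pixels)
obs = sensor_responses(stim_a)
print(ideal_classifier(obs, [stim_a, stim_b]))  # prints 0 (correct template)
```

At these signal-to-noise levels discrimination is essentially perfect; psychophysical comparisons like those in the record come from running such an observer near threshold.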

  5. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years, audio-visual speech recognition has emerged as an active field of research, thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication by voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command-recognition system using audio-visual information. The system is intended to control the da Vinci laparoscopic robot. The audio signal is processed using the Mel-frequency cepstral coefficients (MFCC) parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.

  6. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans.

    PubMed

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-04-26

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared with other objects. Yet the underlying mechanism of face processing has not been fully revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights into the underlying computations that transfer visual information from posterior to anterior face patches.

  7. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans

    PubMed Central

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-01-01

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared with other objects. Yet the underlying mechanism of face processing has not been fully revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights into the underlying computations that transfer visual information from posterior to anterior face patches. PMID:27113635

  8. Sexual motivation is reflected by stimulus-dependent motor cortex excitability.

    PubMed

    Schecklmann, Martin; Engelhardt, Kristina; Konzok, Julian; Rupprecht, Rainer; Greenlee, Mark W; Mokros, Andreas; Langguth, Berthold; Poeppl, Timm B

    2015-08-01

    Sexual behavior involves motivational processes. Findings from both animal models and neuroimaging in humans suggest that the recruitment of neural motor networks is an integral part of the sexual response. However, no study so far has directly linked sexual motivation to physiologically measurable changes in cerebral motor systems in humans. Using transcranial magnetic stimulation in hetero- and homosexual men, we here show that sexual motivation modulates cortical excitability. More specifically, our results demonstrate that visual sexual stimuli corresponding with one's sexual orientation, compared with non-corresponding visual sexual stimuli, increase the excitability of the motor cortex. The reflection of sexual motivation in motor cortex excitability provides evidence for motor preparation processes in sexual behavior in humans. Moreover, such interrelationship links theoretical models and previous neuroimaging findings of sexual behavior.

  9. Two processes support visual recognition memory in rhesus monkeys.

    PubMed

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-11-29

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans.
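
The two-process (recollection plus familiarity) account that this ROC analysis tests is commonly formalized as a dual-process signal detection model, in which recollection contributes a threshold component on top of a continuous familiarity process. The sketch below illustrates that model family with hypothetical parameter values; it is not the authors' fitting procedure.

```python
import math

def dpsd_roc(recollection, d_prime, criteria):
    """Dual-process signal detection ROC sketch:
    hit rate  = R + (1 - R) * Phi(d' - c)   (recollection + familiarity)
    false-alarm rate = Phi(-c)              (familiarity only)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return [(Phi(-c), recollection + (1 - recollection) * Phi(d_prime - c))
            for c in criteria]

# Hypothetical parameters: 30% recollection, familiarity d' = 1.0.
criteria = [1.5, 1.0, 0.5, 0.0, -0.5]
roc = dpsd_roc(recollection=0.3, d_prime=1.0, criteria=criteria)
for far, hr in roc:
    print(f"FAR={far:.3f}  HR={hr:.3f}")
```

A nonzero recollection parameter lifts the ROC's y-intercept and makes the curve asymmetric, which is the signature the receiver-operating-characteristic analyses in the record look for.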

  10. Two processes support visual recognition memory in rhesus monkeys

    PubMed Central

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-01-01

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans. PMID:22084079

  11. Bayesian networks and information theory for audio-visual perception modeling.

    PubMed

    Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis

    2010-09-01

    Thanks to their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
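
The mutual-information-guided elicitation step can be illustrated with a plug-in estimator for discrete variables: pairs with high mutual information are candidates for an edge in the Bayesian network, pairs with none are not. The toy audio-visual data below are hypothetical, chosen only to contrast a dependent pair with an independent one.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits for paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Toy localization responses (hypothetical coding: 0 = left, 1 = right).
audio  = [0, 0, 0, 1, 1, 1, 0, 1]
visual = [0, 0, 1, 1, 1, 1, 0, 1]   # mostly agrees with audio
noise  = [0, 1, 0, 1, 1, 0, 1, 0]   # independent of audio by construction

print(mutual_information(audio, visual))  # ≈ 0.55 bits (strong dependence)
print(mutual_information(audio, noise))   # 0.0 bits (independence)
```

In the article's framework such scores guide which conditional-independence relations the Bayesian network should encode.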

  12. Haptic perception and body representation in lateral and medial occipito-temporal cortices.

    PubMed

    Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M

    2011-04-01

    Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data

    NASA Astrophysics Data System (ADS)

    Rimland, Jeff; Ballora, Mark

    2014-05-01

    The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of "big data" and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization to enable them to be interpreted via human listening. In more complex problems, the challenge is in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that incorporates Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an "instrument" that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric "big data" compression and transmission.
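
The basic parameter-mapping step that sonification tools perform (data value to sound attribute) can be sketched in a few lines. The function name and pitch range below are illustrative assumptions; the CEP hierarchy and vocal-synthesis nuances the authors propose sit on top of mappings like this one.

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map each data value linearly onto a pitch range in Hz: the basic
    parameter-mapping step used by most sonification tools."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # guard against constant input
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

readings = [3.1, 4.7, 9.2, 6.0]      # hypothetical sensor data
print([round(f, 1) for f in sonify(readings)])
# [220.0, 393.1, 880.0, 533.8]
```

The same pattern extends to volume, localization, or, as the paper argues, to the pitch and inflection parameters of a speech synthesizer.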

  14. Mapping the “What” and “Where” Visual Cortices and Their Atrophy in Alzheimer's Disease: Combined Activation Likelihood Estimation with Voxel-Based Morphometry

    PubMed Central

    Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng

    2016-01-01

    The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions and investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in the late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in the AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated with the deterioration of overall cognitive status and with cognitive performance related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770

  15. Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations

    PubMed Central

    Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint

    2016-01-01

    Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. 
When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606

  16. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  17. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    PubMed Central

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the most common methods for enhancing image contrast. However, HE may cause over-enhancement and feature-loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature-loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves on VCEA's enhancement effects. CegaHE adjusts the gaps between two gray values based on an adjustment equation that takes the properties of human visual perception into consideration, to solve the over-enhancement problem. In addition, it alleviates the feature-loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
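
CegaHE's starting point, plain histogram equalization, remaps each gray level through the normalized cumulative histogram (CDF). A minimal sketch of that baseline step follows; the gap-adjustment equation that distinguishes CegaHE is not implemented here.

```python
def equalize(pixels, levels=256):
    """Plain histogram equalization on a flat list of gray levels:
    output(p) = round((cdf(p) - cdf_min) * (levels - 1) / (n - cdf_min))."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first occupied level
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 0
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

dark = [10, 10, 12, 12, 12, 14, 14, 200]  # low-contrast, mostly dark "image"
print(equalize(dark))
# [0, 0, 128, 128, 128, 212, 212, 255]
```

The dark cluster is stretched across the full 0-255 range, which is exactly the behavior that causes the over-enhancement and feature-loss problems the abstract describes.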

  18. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  19. Lateralized Cognition: Asymmetrical and Complementary Strategies of Pigeons during Discrimination of the "Human Concept"

    ERIC Educational Resources Information Center

    Yamazaki, Y.; Aust, U.; Huber, L.; Hausmann, M.; Gunturkun, O.

    2007-01-01

    This study was aimed at revealing which cognitive processes are lateralized in visual categorizations of "humans" by pigeons. To this end, pigeons were trained to categorize pictures of humans and then tested binocularly or monocularly (left or right eye) on the learned categorization and for transfer to novel exemplars (Experiment 1). Subsequent…

  20. Visual Contrast Sensitivity Improvement by Right Frontal High-Beta Activity Is Mediated by Contrast Gain Mechanisms and Influenced by Fronto-Parietal White Matter Microstructure

    PubMed Central

    Quentin, Romain; Elkin Frankston, Seth; Vernet, Marine; Toba, Monica N.; Bartolomeo, Paolo; Chanes, Lorena; Valero-Cabré, Antoni

    2016-01-01

    Behavioral and electrophysiological studies in humans and non-human primates have correlated frontal high-beta activity with the orienting of endogenous attention and shown the ability of the latter function to modulate visual performance. We here combined rhythmic transcranial magnetic stimulation (TMS) and diffusion imaging to study the relation between frontal oscillatory activity and visual performance, and we associated these phenomena to a specific set of white matter pathways that in humans subtend attentional processes. High-beta rhythmic activity on the right frontal eye field (FEF) was induced with TMS and its causal effects on a contrast sensitivity function were recorded to explore its ability to improve visual detection performance across different stimulus contrast levels. Our results show that frequency-specific activity patterns engaged in the right FEF have the ability to induce a leftward shift of the psychometric function. This increase in visual performance across different levels of stimulus contrast is likely mediated by a contrast gain mechanism. Interestingly, microstructural measures of white matter connectivity suggest a strong implication of right fronto-parietal connectivity linking the FEF and the intraparietal sulcus in propagating high-beta rhythmic signals across brain networks and subtending top-down frontal influences on visual performance. PMID:25899709

  1. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
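The Growing When Required dynamics described above can be sketched in a few lines. This is a simplified toy version, not the paper's implementation: the thresholds, learning rates, and the linear habituation rule are illustrative defaults. A node is inserted only when the best-matching unit responds weakly to the input and is already well trained (habituated); otherwise the winner and its topological neighbors adapt.

```python
import numpy as np

class GWR:
    """Minimal Growing When Required network (after Marsland et al., 2002).
    Illustrative sketch with hypothetical default parameters."""

    def __init__(self, dim, a_T=0.8, h_T=0.1, eps_b=0.1, eps_n=0.01, tau=3.0):
        self.w = [np.random.rand(dim), np.random.rand(dim)]  # node weights
        self.h = [1.0, 1.0]                                  # habituation (1 = fresh)
        self.edges = {(0, 1)}
        self.a_T, self.h_T = a_T, h_T
        self.eps_b, self.eps_n, self.tau = eps_b, eps_n, tau

    def _neighbors(self, i):
        return [b if a == i else a for a, b in self.edges if i in (a, b)]

    def step(self, x):
        d = [np.linalg.norm(x - w) for w in self.w]
        b, s = np.argsort(d)[:2]                 # best and second-best units
        activity = np.exp(-d[b])
        if activity < self.a_T and self.h[b] < self.h_T:
            # Network "grows when required": add a node between input and winner.
            self.w.append((self.w[b] + x) / 2.0)
            self.h.append(1.0)
            r = len(self.w) - 1
            self.edges |= {(b, r), (s, r)}
            self.edges.discard((min(b, s), max(b, s)))
        else:
            # Otherwise adapt the winner and its neighbors toward the input,
            # scaled by how "fresh" (unhabituated) each node still is.
            self.w[b] += self.eps_b * self.h[b] * (x - self.w[b])
            for n in self._neighbors(b):
                self.w[n] += self.eps_n * self.h[n] * (x - self.w[n])
            self.h[b] = max(0.0, self.h[b] - 1.0 / self.tau)  # habituate winner
        return b
```

Repeatedly presenting an input far from the existing nodes first habituates the winner, then triggers node insertion, so the topology tracks the input space.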

  2. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
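The high-throughput idea, aggregating many NLP-extracted semantic structures into a frequency summary that a visualization step can then present to the terminology modeler, can be illustrated with a toy example. The tuples below are a hypothetical stand-in for MedLEE output, whose actual structures are far richer:

```python
from collections import Counter

# Hypothetical stand-in for MedLEE-style semantic structures extracted
# from surgical pathology reports: (procedure, slot, value) triples.
structures = [
    ("biopsy", "site", "skin"),
    ("biopsy", "site", "colon"),
    ("excision", "site", "skin"),
    ("biopsy", "method", "needle"),
    ("biopsy", "site", "skin"),
]

# Aggregate slot usage per procedure: this frequency table is what a
# visualization would summarize for the human modeler.
freq = Counter(structures)
slots = Counter((proc, slot) for proc, slot, _ in structures)

for (proc, slot), n in slots.most_common():
    print(f"{proc:10s} {slot:8s} x{n}")
```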

  3. Left hemispheric advantage for numerical abilities in the bottlenose dolphin.

    PubMed

    Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur

    2005-02-28

    In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. (c) 2004 Elsevier B.V. All rights reserved.

  4. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors.
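The trial-by-trial amplitude analysis described above amounts to averaging the pre-target voltage in a fixed time window and comparing trials sorted by subsequent accuracy. A toy simulation of that analysis (the sampling rate, window, effect size, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                   # sampling rate (Hz), assumed
t = np.arange(-0.3, 0.0, 1 / fs)           # 300 ms pre-target epoch

# Simulated single-trial occipital voltages: correct trials carry a
# larger slow positivity (a toy stand-in for the sound-evoked ACOP).
n_trials = 200
correct = rng.random(n_trials) < 0.7
acop = np.where(correct, 2.0, 0.8)[:, None]            # microvolts
trials = acop * np.exp(-((t + 0.15) ** 2) / 0.005) \
         + rng.normal(0, 1.0, (n_trials, t.size))

# Trial-by-trial analysis: mean amplitude in a pre-target window,
# compared between subsequently correct and incorrect discriminations.
win = (t >= -0.2) & (t <= -0.1)
amp = trials[:, win].mean(axis=1)
print(amp[correct].mean(), amp[~correct].mean())
```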

  5. Visualization of migration of human cortical neurons generated from induced pluripotent stem cells.

    PubMed

    Bamba, Yohei; Kanemura, Yonehiro; Okano, Hideyuki; Yamasaki, Mami

    2017-09-01

    Neuronal migration is considered a key process in human brain development. However, direct observation of migrating human cortical neurons in the fetal brain is accompanied by ethical concerns and is a major obstacle in investigating human cortical neuronal migration. We established a novel system that enables direct visualization of migrating cortical neurons generated from human induced pluripotent stem cells (hiPSCs). We observed the migration of cortical neurons generated from hiPSCs derived from a control and from a patient with lissencephaly. Our system needs no viable brain tissue, which is usually used in slice culture. Migratory behavior of human cortical neuron can be observed more easily and more vividly by its fluorescence and glial scaffold than that by earlier methods. Our in vitro experimental system provides a new platform for investigating development of the human central nervous system and brain malformation. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Communicating Numerical Risk: Human Factors That Aid Understanding in Health Care

    PubMed Central

    Brust-Renck, Priscila G.; Royer, Caisa E.; Reyna, Valerie F.

    2014-01-01

    In this chapter, we review evidence from the human factors literature that verbal and visual formats can help increase the understanding of numerical risk information in health care. These visual representations of risk are grounded in empirically supported theory. As background, we first review research showing that people often have difficulty understanding numerical risks and benefits in health information. In particular, we discuss how understanding the meanings of numbers results in healthier decisions. Then, we discuss the processes that determine how communication of numerical risks can enhance (or degrade) health judgments and decisions. Specifically, we examine two different approaches to risk communication: a traditional approach and fuzzy-trace theory. Applying research on the complications of understanding and communicating risks, we then highlight how different visual representations are best suited to communicating different risk messages (i.e., their gist). In particular, we review verbal and visual messages that highlight gist representations that can better communicate health information and improve informed decision making. This discussion is informed by human factors theories and methods, which involve the study of how to maximize the interaction between humans and the tools they use. Finally, we present implications and recommendations for future research on human factors in health care. PMID:24999307

  7. Luminance gradient at object borders communicates object location to the human oculomotor system.

    PubMed

    Kilpeläinen, Markku; Georgeson, Mark A

    2018-01-25

    The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing, at both the neural and perceptual levels, is highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradients at the square's edges.
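The edge-based account can be made concrete: locate the points of maximum luminance gradient on a blurred square profile, and take their midpoint as the predicted saccade endpoint. A one-dimensional sketch (the sigmoid edge model and blur width are assumptions, not the study's stimuli):

```python
import numpy as np

# 1-D luminance profile of a blurred "square": two opposite-polarity
# edges modeled as sigmoids (illustrative stand-in for the stimuli).
x = np.linspace(-4.0, 4.0, 2001)          # degrees of visual angle
edge = lambda x0, sign: sign / (1.0 + np.exp(-(x - x0) / 0.15))
lum = 0.5 + edge(-2.0, +0.4) + edge(+2.0, -0.4)   # square of width 4 deg

# Points of maximum (unsigned) luminance gradient mark the edges; the
# predicted saccade endpoint is midway between them.
grad = np.abs(np.gradient(lum, x))
left = x[x < 0][np.argmax(grad[x < 0])]
right = x[x > 0][np.argmax(grad[x > 0])]
predicted_endpoint = (left + right) / 2.0
print(left, right, predicted_endpoint)
```

Shifting the gradient maxima (as the phase manipulation does) moves `predicted_endpoint` with them, which is the pattern the saccade data followed.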

  8. Bio-inspired display of polarization information using selected visual cues

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader

    2003-12-01

    For imaging systems, the polarization of electromagnetic waves carries much potentially useful information about such features of the world as surface shape, material contents, local curvature of objects, and the relative locations of the source, object and imaging system. The imaging system of the human eye, however, is "polarization-blind", and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some form of sensory substitution is needed for representing polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring are the use of coherently moving dots superimposed on an image to represent various ranges of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of an image with differing polarization, modulating the luminance and/or color contrast of scenes in terms of certain aspects of polarization values, and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.
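One common starting point for such mappings (not necessarily the authors' exact pipeline) is to compute the degree and angle of linear polarization from the measured Stokes parameters and assign them to separate display channels, e.g. hue and saturation, leaving intensity intact:

```python
import numpy as np

def polarization_cues(s0, s1, s2):
    """Map linear Stokes parameters to display cues (illustrative).
    The DoLP/AoP formulas are standard; the hue/saturation mapping is
    just one candidate strategy of the kind described in the abstract."""
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / max(s0, 1e-12)  # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)                      # angle of polarization (rad)
    hue = (aop + np.pi / 2) / np.pi                     # AoP -> hue in [0, 1]
    saturation = min(dolp, 1.0)                         # DoLP -> color saturation
    value = 1.0                                         # intensity channel kept as-is
    return hue, saturation, value
```

For example, fully horizontally polarized light (s0=1, s1=1, s2=0) maps to full saturation, while unpolarized light (s1=s2=0) maps to a desaturated (intensity-only) pixel.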

  9. Honeybees can discriminate between Monet and Picasso paintings.

    PubMed

    Wu, Wen; Moreno, Antonio M; Tangen, Jason M; Reinhard, Judith

    2013-01-01

    Honeybees (Apis mellifera) have remarkable visual learning and discrimination abilities that extend beyond learning simple colours, shapes or patterns. They can discriminate landscape scenes, types of flowers, and even human faces. This suggests that in spite of their small brain, honeybees have a highly developed capacity for processing complex visual information, comparable in many respects to vertebrates. Here, we investigated whether this capacity extends to complex images that humans distinguish on the basis of artistic style: Impressionist paintings by Monet and Cubist paintings by Picasso. We show that honeybees learned to simultaneously discriminate between five different Monet and Picasso paintings, and that they do not rely on luminance, colour, or spatial frequency information for discrimination. When presented with novel paintings of the same style, the bees even demonstrated some ability to generalize. This suggests that honeybees are able to discriminate Monet paintings from Picasso ones by extracting and learning the characteristic visual information inherent in each painting style. Our study further suggests that discrimination of artistic styles is not a higher cognitive function that is unique to humans, but rather reflects the capacity of animals, from insects to humans, to extract and categorize the visual characteristics of complex images.

  10. Differences in the Visual Perception of Symmetric Patterns in Orangutans (Pongo pygmaeus abelii) and Two Human Cultural Groups: A Comparative Eye-Tracking Study.

    PubMed

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2016-01-01

    Symmetric structures are of importance in relation to aesthetic preference. To investigate whether the preference for symmetric patterns is unique to humans, independent of their cultural background, we compared two human populations with distinct cultural backgrounds (Namibian hunter-gatherers and German town dwellers) with one species of non-human great apes (Orangutans) in their viewing behavior regarding symmetric and asymmetric patterns in two levels of complexity. In addition, the human participants were asked to give their aesthetic evaluation of a subset of the presented patterns. The results showed that humans of both cultural groups fixated on symmetric patterns for a longer period of time, regardless of the pattern's complexity. By contrast, the Orangutans did not clearly differentiate between symmetric and asymmetric patterns, but were much faster in processing the presented stimuli and scanned the complete screen, while both human groups rested on the symmetric pattern after a short scanning time. The aesthetic evaluation test revealed that the fixation preference for symmetric patterns did not match the aesthetic evaluation in the Hai//om group, whereas in the German group aesthetic evaluation was in accordance with the fixation preference in 60 percent of the cases. It can be concluded that humans prefer well-ordered structures in visual processing tasks, most likely because of a positive processing bias for symmetry, which the Orangutans did not show in this task, and that, in humans, an aesthetic preference does not necessarily accompany the fixation preference.

  11. Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.

    PubMed

    Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo

    2016-07-01

    During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed, it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify a) current HCI design methods, and b) approaches for selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised into 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
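Although the model itself pools MSTd-like cell responses rather than fitting templates, the underlying geometry of heading from optic flow can be sketched directly: for pure observer translation, every flow vector points away from the focus of expansion (FOE), whose retinal position gives the heading, and that parallelism constraint is linear in the FOE coordinates. A toy estimate on a synthetic flow field (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic optic flow for pure observer translation: vectors radiate
# from the focus of expansion (FOE), whose location gives heading.
foe = np.array([0.3, -0.2])                # true heading (arbitrary units)
pts = rng.uniform(-1, 1, (300, 2))         # sampled retinal positions
flow = (pts - foe) * 0.5                   # radial flow, speed grows with eccentricity
flow += rng.normal(0, 0.02, flow.shape)    # motion noise

# Each flow vector v should be parallel to (p - foe); the cross-product
# constraint v_y*(p_x - f_x) - v_x*(p_y - f_y) = 0 is linear in the FOE.
A = np.stack([flow[:, 1], -flow[:, 0]], axis=1)
b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
foe_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(foe_hat)
```

Extraretinal rotation signals matter because eye rotation adds a non-radial flow component that breaks this parallelism and must be discounted before the FOE can be recovered.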

  13. Out of sight but not out of mind: the neurophysiology of iconic memory in the superior temporal sulcus.

    PubMed

    Keysers, C; Xiao, D-K; Foldiak, P; Perrett, D I

    2005-05-01

    Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented.

  14. Comparison of dogs and humans in visual scanning of social interaction.

    PubMed

    Törnqvist, Heini; Somppi, Sanni; Koskela, Aija; Krause, Christina M; Vainio, Outi; Kujala, Miiamaaria V

    2015-09-01

    Previous studies have demonstrated similarities in the gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away; the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations with different social experiences: family and kennel dogs; dog experts and non-experts. Dogs' gazing behaviour was similar to humans': both species gazed longer at the actors in social interaction than in non-social images. However, humans gazed longer at the actors in dog than human social interaction images, whereas dogs gazed longer at the actors in human than dog social interaction images. Both species also made more saccades between actors in images representing non-conspecifics, which could indicate that processing the social interaction of non-conspecifics may be more demanding. Dog experts and non-experts viewed the images very similarly. Kennel dogs viewed the images for less time than family dogs, but otherwise their gazing behaviour did not differ, indicating that the basic processing of social stimuli remains similar regardless of social experience.

  15. Dissociable neural responses to hands and non-hand body parts in human left extrastriate visual cortex.

    PubMed

    Bracci, Stefania; Ietswaart, Magdalena; Peelen, Marius V; Cavina-Pratesi, Cristiana

    2010-06-01

    Accumulating evidence points to a map of visual regions encoding specific categories of objects. For example, a region in the human extrastriate visual cortex, the extrastriate body area (EBA), has been implicated in the visual processing of bodies and body parts. Although in the monkey, neurons selective for hands have been reported, in humans it is unclear whether areas selective for individual body parts such as the hand exist. Here, we conducted two functional MRI experiments to test for hand-preferring responses in the human extrastriate visual cortex. We found evidence for a hand-preferring region in left lateral occipitotemporal cortex in all 14 participants. This region, located in the lateral occipital sulcus, partially overlapped with left EBA, but could be functionally and anatomically dissociated from it. In experiment 2, we further investigated the functional profile of hand- and body-preferring regions by measuring responses to hands, fingers, feet, assorted body parts (arms, legs, torsos), and non-biological handlike stimuli such as robotic hands. The hand-preferring region responded most strongly to hands, followed by robotic hands, fingers, and feet, whereas its response to assorted body parts did not significantly differ from baseline. By contrast, EBA responded most strongly to body parts, followed by hands and feet, and did not significantly respond to robotic hands or fingers. Together, these results provide evidence for a representation of the hand in extrastriate visual cortex that is distinct from the representation of other body parts.

  17. Overview of Human-Centric Space Situational Awareness (SSA) Science and Technology (S&T)

    NASA Astrophysics Data System (ADS)

    Ianni, J.; Aleva, D.; Ellis, S.

    2012-09-01

    A number of organizations within government, industry, and academia are researching ways to help humans understand and react to events in space. The problem is both helped and complicated by the fact that there are numerous data sources that need to be planned (i.e., tasked), collected, processed, analyzed, and disseminated. A large part of the research is in support of the Joint Space Operational Center (JSpOC), National Air and Space Intelligence Center (NASIC), and similar organizations. Much recent research has specifically targeted the JSpOC Mission System (JMS), which has provided a unifying software architecture. This paper first outlines areas of science and technology (S&T) related to human-centric space situational awareness (SSA) and space command and control (C2), including:
    1. Object visualization - especially of data fused from disparate sources, and satellite catalog visualizations that convey the physical relationships between space objects.
    2. Data visualization - improved data trend analysis, as in visual analytics and interactive visualization; e.g., satellite anomaly trends over time, space weather visualization, and dynamic visualizations.
    3. Workflow support - human-computer interfaces that encapsulate multiple computer services (i.e., algorithms, programs, applications) into a single workflow.
    4. Command and control - e.g., tools that support course of action (COA) development and selection, and tasking for satellites and sensors.
    5. Collaboration - improving the ability of individuals or teams to work with others; e.g., video teleconferencing, shared virtual spaces, file sharing, virtual white-boards, chat, and knowledge search.
    6. Hardware/facilities - e.g., optimal layouts for operations centers, ergonomic workstations, immersive displays, interaction technologies, and mobile computing.
    Second, we provide a survey of organizations working in these areas and suggest where more attention may be needed. Although no detailed master plan exists for human-centric SSA and C2, we see little redundancy among the groups supporting SSA human factors at this point.

  18. Scene and human face recognition in the central vision of patients with glaucoma

    PubMed Central

    Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole

    2018-01-01

    Primary open-angle glaucoma (POAG) initially affects mainly peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. 
This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using Humphrey 24–2 SITA-Standard test. PMID:29481572

  19. Simultaneous chromatic and luminance human electroretinogram responses.

    PubMed

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-07-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats' ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing.
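The frequency-tagging logic of the compound stimulus is easy to demonstrate: with the luminance component at twice the chromatic frequency, the two pathways' contributions land on separate harmonics of the response spectrum. A toy illustration with a noiseless synthetic "response" (frequencies and amplitudes are invented, not the study's values):

```python
import numpy as np

fs, dur = 1000, 2.0                       # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)
f_chrom = 4.0                             # chromatic modulation frequency (assumed)

# Compound stimulus: chromatic component at f, luminance at 2f, so the
# first harmonic of the response indexes chromatic processing and the
# second harmonic indexes luminance processing.
chromatic = 1.0 * np.sin(2 * np.pi * f_chrom * t)
luminance = 0.6 * np.sin(2 * np.pi * 2 * f_chrom * t)
erg = chromatic + luminance               # toy "response", no noise

spec = np.abs(np.fft.rfft(erg)) / (len(t) / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
h1 = spec[np.argmin(np.abs(freqs - f_chrom))]     # first harmonic
h2 = spec[np.argmin(np.abs(freqs - 2 * f_chrom))] # second harmonic
print(h1, h2)
```

The two components separate exactly because an integer number of cycles fits the analysis window; in real ERG recordings the same separation is done on noisy averaged sweeps.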

  20. Interdigitated Color- and Disparity-Selective Columns within Human Visual Cortical Areas V2 and V3

    PubMed Central

    Polimeni, Jonathan R.; Tootell, Roger B.H.

    2016-01-01

    In nonhuman primates (NHPs), secondary visual cortex (V2) is composed of repeating columnar stripes, which are evident in histological variations of cytochrome oxidase (CO) levels. Distinctive “thin” and “thick” stripes of dark CO staining reportedly respond selectively to stimulus variations in color and binocular disparity, respectively. Here, we first tested whether similar color-selective or disparity-selective stripes exist in human V2. If so, available evidence predicts that such stripes should (1) radiate “outward” from the V1–V2 border, (2) interdigitate, (3) differ from each other in both thickness and length, (4) be spaced ∼3.5–4 mm apart (center-to-center), and, perhaps, (5) have segregated functional connections. Second, we tested whether analogous segregated columns exist in a “next-higher” tier area, V3. To answer these questions, we used high-resolution fMRI (1 × 1 × 1 mm3) at high field (7 T), presenting color-selective or disparity-selective stimuli, plus extensive signal averaging across multiple scan sessions and cortical surface-based analysis. All hypotheses were confirmed. V2 stripes and V3 columns were reliably localized in all subjects. The two stripe/column types were largely interdigitated (e.g., nonoverlapping) in both V2 and V3. Color-selective stripes differed from disparity-selective stripes in both width (thickness) and length. Analysis of resting-state functional connections (eyes closed) showed a stronger correlation between functionally alike (compared with functionally unlike) stripes/columns in V2 and V3. These results revealed a fine-scale segregation of color-selective or disparity-selective streams within human areas V2 and V3. Together with prior evidence from NHPs, this suggests that two parallel processing streams extend from visual subcortical regions through V1, V2, and V3. 
SIGNIFICANCE STATEMENT In current textbooks and reviews, diagrams of cortical visual processing highlight two distinct neural-processing streams within the first and second cortical areas in monkeys. Two major streams consist of segregated cortical columns that are selectively activated by either color or ocular interactions. Because such cortical columns are so small, they were not revealed previously by conventional imaging techniques in humans. Here we demonstrate that such segregated columnar systems exist in humans. We find that, in humans, color versus binocular disparity columns extend one full area further, into the third visual area. Our approach can be extended to reveal and study additional types of columns in human cortex, perhaps including columns underlying more cognitive functions. PMID:26865609

  1. Progressive Recruitment of Mesenchymal Progenitors Reveals a Time-Dependent Process of Cell Fate Acquisition in Mouse and Human Nephrogenesis.

    PubMed

    Lindström, Nils O; De Sena Brandine, Guilherme; Tran, Tracy; Ransick, Andrew; Suh, Gio; Guo, Jinjin; Kim, Albert D; Parvez, Riana K; Ruffins, Seth W; Rutledge, Elisabeth A; Thornton, Matthew E; Grubbs, Brendan; McMahon, Jill A; Smith, Andrew D; McMahon, Andrew P

    2018-06-04

    Mammalian nephrons arise from a limited nephron progenitor pool through a reiterative inductive process extending over days (mouse) or weeks (human) of kidney development. Here, we present evidence that human nephron patterning reflects a time-dependent process of recruitment of mesenchymal progenitors into an epithelial nephron precursor. Progressive recruitment predicted from high-resolution image analysis and three-dimensional reconstruction of human nephrogenesis was confirmed through direct visualization and cell fate analysis of mouse kidney organ cultures. Single-cell RNA sequencing of the human nephrogenic niche provided molecular insights into these early patterning processes and predicted developmental trajectories adopted by nephron progenitor cells in forming segment-specific domains of the human nephron. The temporal-recruitment model for nephron polarity and patterning suggested by direct analysis of human kidney development provides a framework for integrating signaling pathways driving mammalian nephrogenesis. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex

    PubMed Central

    Summerfield, Christopher; Egner, Tobias

    2016-01-01

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? 
By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936

  3. Fluctuation scaling in the visual cortex at threshold

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Díaz, José A.

    2016-05-01

Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and what its functional role in visual processing might be. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects' detection thresholds were measured in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability resulting from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06), which departs from the expected 4/3 power-function regime. A random multiplicative process with weak additive noise is proposed, based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
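As a toy numerical illustration of the fluctuation-scaling analysis described above, the sketch below simulates a random multiplicative process with weak additive noise and estimates the scaling exponent β as the slope of log-variance against log-mean across stimulus levels. All parameter values are illustrative assumptions, not the paper's data or model fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_thresholds(level, n_trials=20000):
    """Toy random multiplicative process with weak additive noise,
    loosely standing in for the Langevin-type model in the abstract."""
    multiplicative = rng.lognormal(mean=np.log(level), sigma=0.4, size=n_trials)
    additive = rng.normal(0.0, 0.01 * level, size=n_trials)
    return multiplicative + additive

# Fluctuation scaling: variance ~ mean**beta across conditions,
# estimated as the slope of log(variance) against log(mean)
means, variances = [], []
for level in [0.5, 1.0, 2.0, 4.0, 8.0]:
    x = simulate_thresholds(level)
    means.append(x.mean())
    variances.append(x.var())

beta, _ = np.polyfit(np.log(means), np.log(variances), 1)
# Pure multiplicative (lognormal) noise gives beta close to 2; the measured
# departure from 2 in the paper is attributed to on-off intermittency
print(f"estimated scaling exponent beta = {beta:.2f}")
```

The slope-of-logs estimator is the standard way to read off a fluctuation-scaling exponent; reproducing the paper's β = 2.48 would require its actual intermittency dynamics, which this sketch does not attempt.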

  4. Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    PubMed Central

    Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.

    2011-01-01

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
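The sigmoidal time-course fits described above can be sketched as follows. The functional form (chance-level 2AFC accuracy rising exponentially with presentation time), the synthetic data, and all parameter values are assumptions for illustration, not the authors' fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(t_ms, tau, t0):
    """2AFC accuracy rising from chance (0.5) toward 1.0 with exponential
    time constant tau (ms) after a processing-onset delay t0 (ms)."""
    return 0.5 + 0.5 * (1.0 - np.exp(-np.clip(t_ms - t0, 0.0, None) / tau))

rng = np.random.default_rng(1)
# Presentation times spanning the 20-200 ms range used in the experiments
t = np.array([20, 40, 60, 80, 120, 160, 200], dtype=float)
# Synthetic observer: 200 trials per duration, true tau = 60 ms
observed = rng.binomial(200, psychometric(t, tau=60.0, t0=15.0)) / 200.0

(tau_hat, t0_hat), _ = curve_fit(psychometric, t, observed,
                                 p0=(50.0, 10.0),
                                 bounds=([1.0, 0.0], [500.0, 50.0]))
print(f"fitted time constant: {tau_hat:.0f} ms")
```

Fitting the same curve separately per target-complexity condition is how one would recover the complexity-dependent time constants (30-91 ms) reported above.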

  5. Perceptually lossless fractal image compression

    NASA Astrophysics Data System (ADS)

    Lin, Huawu; Venetsanopoulos, Anastasios N.

    1996-02-01

    According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
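The idea of using an interpolated contrast-threshold function as an error upper bound can be sketched as below. The Weber-like threshold function is a toy stand-in for the interpolated Blackwell data, with illustrative constants; the acceptance test itself (every pixel error below the local visual threshold) is the principle the abstract describes:

```python
import numpy as np

def contrast_threshold(luminance):
    """Toy stand-in for interpolated Blackwell threshold data:
    the just-noticeable luminance difference grows with background
    luminance (Weber-like). Constants are illustrative."""
    return 0.01 * luminance + 0.5

def perceptually_lossless(original, decoded):
    """Accept a decoded block only if every pixel error stays below
    the visual threshold at the local background luminance."""
    thresholds = contrast_threshold(original)
    return bool(np.all(np.abs(original - decoded) < thresholds))

rng = np.random.default_rng(5)
block = rng.uniform(50, 200, size=(8, 8))         # one 8x8 luminance block
ok = perceptually_lossless(block, block + 0.3)    # sub-threshold error
bad = perceptually_lossless(block, block + 5.0)   # supra-threshold error
print(ok, bad)
```

In a fractal encoder, this predicate would replace a plain mean-squared-error cutoff when deciding whether a domain-to-range mapping is acceptable.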

  6. Lightness computation by the human visual system

    NASA Astrophysics Data System (ADS)

    Rudd, Michael E.

    2017-05-01

A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and attentional windowing of spatial integration. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2), while spatial integration windowing occurs in cortical area V4 or beyond.

  7. Camouflage and visual perception

    PubMed Central

    Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt

    2008-01-01

    How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671

  8. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.

  9. When a Dog Has a Pen for a Tail: The Time Course of Creative Object Processing

    ERIC Educational Resources Information Center

    Wang, Botao; Duan, Haijun; Qi, Senqing; Hu, Weiping; Zhang, Huan

    2017-01-01

    Creative objects differ from ordinary objects in that they are created by human beings to contain novel, creative information. Previous research has demonstrated that ordinary object processing involves both a perceptual process for analyzing different features of the visual input and a higher-order process for evaluating the relevance of this…

  10. Miniature Brain Decision Making in Complex Visual Environments

    DTIC Science & Technology

    2008-07-18

The grantee investigated, using the honeybee (Apis mellifera) as a model...successful for understanding face processing in both human adults and infants. Individual honeybees (Apis mellifera) were trained with...for 30 bees (group 3) of the target stimuli. Bernard J, Stach S, Giurfa M (2007) Categorization of visual stimuli in the honeybee Apis mellifera.

  11. Where Similarity Beats Redundancy: The Importance of Context, Higher Order Similarity, and Response Assignment

    ERIC Educational Resources Information Center

    Eidels, Ami; Townsend, James T.; Pomerantz, James R.

    2008-01-01

    People are especially efficient in processing certain visual stimuli such as human faces or good configurations. It has been suggested that topology and geometry play important roles in configural perception. Visual search is one area in which configurality seems to matter. When either of 2 target features leads to a correct response and the…

  12. The footprints of visual attention in the Posner cueing paradigm revealed by classification images

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Shimozaki, Steven S.; Abbey, Craig K.

    2002-01-01

    In the Posner cueing paradigm, observers' performance in detecting a target is typically better in trials in which the target is present at the cued location than in trials in which the target appears at the uncued location. This effect can be explained in terms of a Bayesian observer where visual attention simply weights the information differently at the cued (attended) and uncued (unattended) locations without a change in the quality of processing at each location. Alternatively, it could also be explained in terms of visual attention changing the shape of the perceptual filter at the cued location. In this study, we use the classification image technique to compare the human perceptual filters at the cued and uncued locations in a contrast discrimination task. We did not find statistically significant differences between the shapes of the inferred perceptual filters across the two locations, nor did the observed differences account for the measured cueing effects in human observers. Instead, we found a difference in the magnitude of the classification images, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
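The weighted-information (Bayesian observer) account described above can be sketched numerically: evidence quality is identical at the cued and uncued locations, and only the prior weight given to the cued location differs, yet a cueing effect emerges. The task structure (forced localization), cue validity, and signal strength below are illustrative assumptions, not the authors' experiment:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 50000, 1.0                 # trials; target signal strength (noise-SD units)
validity = 0.8                    # cue predicts the target location 80% of the time

# Target appears at the cued location on `validity` of trials
at_cued = rng.random(N) < validity
resp_cued = rng.normal(0.0, 1.0, N) + d * at_cued        # noisy evidence, cued
resp_uncued = rng.normal(0.0, 1.0, N) + d * (~at_cued)   # noisy evidence, uncued

# Bayesian observer: equal-quality log-likelihood ratios at both locations,
# with the prior (cue validity) added as a weight to the cued location
llr_cued = d * (resp_cued - d / 2)
llr_uncued = d * (resp_uncued - d / 2)
choose_cued = llr_cued + np.log(validity / (1 - validity)) > llr_uncued

accuracy_valid = np.mean(choose_cued[at_cued])           # target was at cued location
accuracy_invalid = np.mean(~choose_cued[~at_cued])       # target was at uncued location
print(f"valid {accuracy_valid:.2f} vs invalid {accuracy_invalid:.2f}")
```

The valid-cue advantage here arises purely from the prior weighting term, with no change in the quality of processing at either location, which is the interpretation the classification-image data support.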

  13. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  14. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    PubMed

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  15. Functional neuroanatomy of visual masking deficits in schizophrenia.

    PubMed

    Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C

    2009-12-01

Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. Objective: To identify and compare regional brain activity associated with one form of visual masking (i.e., backward masking) in schizophrenic patients and healthy controls. Design: Subjects received functional magnetic resonance imaging scans. While in the scanner, subjects performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). Setting: University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System. Participants: Nineteen patients with schizophrenia and 19 healthy control subjects. Main Outcome Measure: The magnitude of the functional magnetic resonance imaging signal during backward masking. Results: Two ROIs (lateral occipital complex [LO] and the human motion-selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including bilateral inferior parietal lobe and thalamus), but groups did not differ in signal magnitude in these areas. Conclusions: The study results support a key role for LO in visual masking, consistent with previous studies in healthy controls. The current results indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.

  16. Do bees like Van Gogh's Sunflowers?

    NASA Astrophysics Data System (ADS)

    Chittka, Lars; Walker, Julian

    2006-06-01

Flower colours have evolved over 100 million years to address the colour vision of their bee pollinators. In a much more rapid process, cultural (and horticultural) evolution has produced images of flowers that stimulate aesthetic responses in human observers. The colour vision and analysis of visual patterns differ in several respects between humans and bees. Here, a behavioural ecologist and an installation artist present bumblebees with reproductions of paintings highly appreciated in Western society, such as Van Gogh's Sunflowers. We use this unconventional approach in the hope of raising awareness of between-species differences in visual perception, and of provoking thought about the implications of biology in human aesthetics and the relationship between object representation and its biological connotations.

17. Visualizing Human Migration Through Space and Time

    NASA Astrophysics Data System (ADS)

    Zambotti, G.; Guan, W.; Gest, J.

    2015-07-01

Human migration has been an important activity in human societies since antiquity. Since 1890, approximately three percent of the world's population has lived outside of their country of origin. As globalization intensifies in the modern era, human migration persists even as governments seek to more stringently regulate flows. Understanding this phenomenon, its causes, processes, and impacts often starts from measuring and visualizing its spatiotemporal patterns. This study builds a generic online platform for users to interactively visualize human migration through space and time. This entails quickly ingesting human migration data in plain text or tabular format; matching the records with pre-established geographic features such as administrative polygons; symbolizing the migration flow by circular arcs of varying color and weight based on the flow attributes; connecting the centroids of the origin and destination polygons; and allowing the user to select either an origin or a destination feature to display all flows in or out of that feature through time. The method was first developed using ArcGIS Server for world-wide cross-country migration, and later applied to visualizing domestic migration patterns within China between provinces, and between states in the United States, all through multiple years. The technical challenges of this study include simplifying the shapes of features to enhance user interaction, rendering performance, and application scalability; enabling the temporal renderers to provide time-based rendering of features and the flow among them; and developing a responsive web design (RWD) application to provide an optimal viewing experience. The platform is available online for the public to use, and the methodology is easily adaptable to visualizing any flow, not only human migration but also the flow of goods, capital, disease, ideology, etc., between multiple origins and destinations across space and time.

  18. Conflicting Demands of Abstract and Specific Visual Object Processing Resolved by Fronto-Parietal Networks

    PubMed Central

    McMenamin, Brenton W.; Marsolek, Chad J.; Morseth, Brianna K.; Speer, MacKenzie F.; Burton, Philip C.; Burgund, E. Darcy

    2016-01-01

    Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities – an abstract category (AC) subsystem that operates effectively in the left hemisphere, and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad fronto-parietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials was related to the visual similarity of the cue/probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations. PMID:26883940

  19. Conflicting demands of abstract and specific visual object processing resolved by frontoparietal networks.

    PubMed

    McMenamin, Brenton W; Marsolek, Chad J; Morseth, Brianna K; Speer, MacKenzie F; Burton, Philip C; Burgund, E Darcy

    2016-06-01

    Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities-an abstract category (AC) subsystem that operates effectively in the left hemisphere and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad frontoparietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials was related to the visual similarity of the cue-probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations.

  20. Within-hemifield competition in early visual areas limits the ability to track multiple objects with attention.

    PubMed

    Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick

    2014-08-27

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
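The frequency-tagging logic behind the steady-state visual evoked potential measurement described above can be sketched simply: each stimulus flickers at its own frequency, and the response it drives is read out as the FFT amplitude at that frequency. The one-channel signal below is synthetic, and the frequencies, amplitudes, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, dur = 500.0, 10.0                      # sampling rate (Hz), trial length (s)
t = np.arange(0, dur, 1 / fs)
f_target, f_distractor = 12.0, 15.0        # illustrative tagging frequencies

# Synthetic one-channel EEG: the attended (target) flicker drives a larger
# steady-state response than the unattended (distractor) flicker
eeg = (1.5 * np.sin(2 * np.pi * f_target * t)
       + 0.8 * np.sin(2 * np.pi * f_distractor * t)
       + rng.normal(0.0, 1.0, t.size))

spectrum = 2 * np.abs(np.fft.rfft(eeg)) / t.size       # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Amplitude at the FFT bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(f"target {amp_at(f_target):.2f} vs distractor {amp_at(f_distractor):.2f}")
```

Comparing the tag amplitudes for targets against distractors, per tracking condition, is how attentional modulation of early visual responses is quantified in this paradigm.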

  1. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    PubMed

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. 
In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping onto the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including the cingulate cortex and the right inferior and middle frontal gyri, which are involved in the go-signal and in decision control. These results in healthy subjects suggest that the abstract visual feedback provided during motor training is appropriate. The task helps to highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.

  2. Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains.

    PubMed

    Sklar, A E; Sarter, N B

    1999-12-01

    Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.

  3. First comparative approach to touchscreen-based visual object-location paired-associates learning in humans (Homo sapiens) and a nonhuman primate (Microcebus murinus).

    PubMed

    Schmidtke, Daniel; Ammersdörfer, Sandra; Joly, Marine; Zimmermann, Elke

    2018-05-10

A recent study suggests that a specific, touchscreen-based task on visual object-location paired-associates learning (PAL), the so-called Different PAL (dPAL) task, allows effective translation from animal models to humans. Here, we adapted the task to a nonhuman primate (NHP), the gray mouse lemur, and provide first evidence for the successful comparative application of the task to humans and NHPs. Young human adults reach the learning criterion in considerably fewer sessions (by an order of magnitude) than young adult NHPs, likely because humans reject ineffective learning strategies faster and voluntarily, and generalize the rule almost immediately. At criterion, however, all human subjects solved the task either by applying a visuospatial rule or, more rarely, by memorizing all possible stimulus combinations and responding correctly based on global visual information. An error-profile analysis in humans and NHPs suggests that successful learning in NHPs is likewise based either on the formation of visuospatial associative links or on more reflexive, visually guided stimulus-response learning. The classification in the NHPs is further supported by an analysis of the individual response latencies, which are considerably higher in NHPs classified as spatial learners. Our results, therefore, support the high translational potential of the standardized, touchscreen-based dPAL task by providing first empirical and comparable evidence for two different cognitive processes underlying dPAL performance in primates.

  4. Competition for Left Hemisphere Resources: Right Hemisphere Superiority at Abstract Verbal Information Processing.

    ERIC Educational Resources Information Center

    Polson, Martha C.; And Others

    A study tested a multiple-resources model of human information processing wherein the two cerebral hemispheres are assumed to have separate, limited-capacity pools of undifferentiated resources. The subjects were five right-handed males who had demonstrated right visual field-left hemisphere (RVF-LH) superiority for processing a centrally…

  5. Transcranial Random Noise Stimulation of Visual Cortex: Stochastic Resonance Enhances Central Mechanisms of Perception.

    PubMed

    van der Groen, Onno; Wenderoth, Nicole

    2016-05-11

Random noise enhances the detectability of weak signals in nonlinear systems, a phenomenon known as stochastic resonance (SR). Though counterintuitive at first, SR has been demonstrated in a variety of naturally occurring processes, including human perception, where it has been shown that adding noise directly to weak visual, tactile, or auditory stimuli enhances detection performance. These results indicate that random noise can push subthreshold receptor potentials across the transfer threshold, causing action potentials in an otherwise silent afference. Despite the wealth of evidence demonstrating SR for noise added to a stimulus, relatively few studies have explored whether or not noise added directly to cortical networks enhances sensory detection. Here we administered transcranial random noise stimulation (tRNS; 100-640 Hz zero-mean Gaussian white noise) to the occipital region of human participants. For increasing tRNS intensities (ranging from 0 to 1.5 mA), the detection accuracy of a visual stimulus changed according to an inverted-U-shaped function, typical of the SR phenomenon. When the optimal level of noise was added to visual cortex, detection performance improved significantly relative to a zero noise condition (9.7 ± 4.6%) and to a similar extent as optimal noise added to the visual stimuli (11.2 ± 4.7%). Our results demonstrate that adding noise to cortical networks can improve human behavior and that tRNS is an appropriate tool to exploit this mechanism. Our findings suggest that neural processing at the network level exhibits nonlinear system properties that are sensitive to the stochastic resonance phenomenon and highlight the usefulness of tRNS as a tool to modulate human behavior. 
Since tRNS can be applied to all cortical areas, exploiting the SR phenomenon is not restricted to the perceptual domain, but can be used for other functions that depend on nonlinear neural dynamics (e.g., decision making, task switching, response inhibition, and many other processes). This will open new avenues for using tRNS to investigate brain function and enhance the behavior of healthy individuals or patients.
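The inverted-U relationship described in this record can be illustrated with a toy threshold-detector simulation (a minimal sketch, not the authors' experimental paradigm; the signal amplitude, threshold, and noise levels below are arbitrary illustrative choices): a subthreshold stimulus alone is never detected, a moderate amount of zero-mean Gaussian noise pushes it across threshold more often than it produces false alarms, and strong noise washes the advantage out.

```python
import random

def detection_accuracy(noise_sd, signal=0.8, threshold=1.0,
                       trials=4000, seed=0):
    """Two-condition detection with a fixed threshold: half the trials
    contain a subthreshold signal, half contain nothing; respond
    'present' whenever signal + noise crosses the threshold.
    Returns overall proportion correct (hits + correct rejections)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Signal-present trial: correct if the threshold is crossed.
        if signal + rng.gauss(0.0, noise_sd) >= threshold:
            correct += 1
        # Signal-absent trial: correct if the threshold is NOT crossed.
        if rng.gauss(0.0, noise_sd) < threshold:
            correct += 1
    return correct / (2 * trials)

# Accuracy at near-zero, moderate, and strong noise levels.
accuracies = {sd: detection_accuracy(sd) for sd in (0.01, 0.5, 5.0)}
```

With near-zero noise the subthreshold signal never crosses threshold, so accuracy sits at chance (0.5); moderate noise yields the best accuracy; very strong noise drives both trial types toward guessing. This is the inverted-U signature of stochastic resonance.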

  6. Individual differences in visual motion perception and neurotransmitter concentrations in the human brain.

    PubMed

    Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M

    2017-02-19

Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena (motion assimilation and motion contrast), we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with the participant's tendency to motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in a suprasensory area are important for an individual's tendency toward one of two antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'.

  7. The sequence of cortical activity inferred by response latency variability in the human ventral pathway of face processing.

    PubMed

    Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan

    2018-04-11

Variability in neuronal response latency has typically been considered to be caused by random noise. Previous studies of single cells and large neuronal populations have shown that this temporal variability tends to increase along the visual pathway. Inspired by these studies, we hypothesized that functional areas at later stages in the visual pathway of face processing would have larger variability in response latency. To test this hypothesis, we used magnetoencephalographic data collected while subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in response latency than the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine sulcus to the fusiform gyrus was detected more reliably from the size of the response variability than from the timing of the maximal response peaks. With two areas in the ventral visual pathway, we show that variability in response latency across brain areas can be used to infer the sequence of cortical activity.

  8. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content.

  9. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    PubMed

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? 
We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible.

  10. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers

    PubMed Central

    Kanjlia, Shipra; Merabet, Lotfi B.

    2017-01-01

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the “VWFA” is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? 
We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. PMID:29061700

  11. [Visual Texture Agnosia in Humans].

    PubMed

    Suzuki, Kyoko

    2015-06-01

    Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.

  12. Emotional and movement-related body postures modulate visual processing

    PubMed Central

    Borhani, Khatereh; Làdavas, Elisabetta; Maier, Martin E.; Avenanti, Alessio

    2015-01-01

    Human body postures convey useful information for understanding others’ emotions and intentions. To investigate at which stage of visual processing emotional and movement-related information conveyed by bodies is discriminated, we examined event-related potentials elicited by laterally presented images of bodies with static postures and implied-motion body images with neutral, fearful or happy expressions. At the early stage of visual structural encoding (N190), we found a difference in the sensitivity of the two hemispheres to observed body postures. Specifically, the right hemisphere showed a N190 modulation both for the motion content (i.e. all the observed postures implying body movements elicited greater N190 amplitudes compared with static postures) and for the emotional content (i.e. fearful postures elicited the largest N190 amplitude), while the left hemisphere showed a modulation only for the motion content. In contrast, at a later stage of perceptual representation, reflecting selective attention to salient stimuli, an increased early posterior negativity was observed for fearful stimuli in both hemispheres, suggesting an enhanced processing of motivationally relevant stimuli. The observed modulations, both at the early stage of structural encoding and at the later processing stage, suggest the existence of a specialized perceptual mechanism tuned to emotion- and action-related information conveyed by human body postures. PMID:25556213

  13. Selective attention to task-irrelevant emotional distractors is unaffected by the perceptual load associated with a foreground task.

    PubMed

    Hindi Attar, Catherine; Müller, Matthias M

    2012-01-01

    A number of studies have shown that emotionally arousing stimuli are preferentially processed in the human brain. Whether or not this preference persists under increased perceptual load associated with a task at hand remains an open question. Here we manipulated two possible determinants of the attentional selection process, perceptual load associated with a foreground task and the emotional valence of concurrently presented task-irrelevant distractors. As a direct measure of sustained attentional resource allocation in early visual cortex we used steady-state visual evoked potentials (SSVEPs) elicited by distinct flicker frequencies of task and distractor stimuli. Subjects either performed a detection (low load) or discrimination (high load) task at a centrally presented symbol stream that flickered at 8.6 Hz while task-irrelevant neutral or unpleasant pictures from the International Affective Picture System (IAPS) flickered at a frequency of 12 Hz in the background of the stream. As reflected in target detection rates and SSVEP amplitudes to both task and distractor stimuli, unpleasant relative to neutral background pictures more strongly withdrew processing resources from the foreground task. Importantly, this finding was unaffected by the factor 'load' which turned out to be a weak modulator of attentional processing in human visual cortex.
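The frequency-tagging logic behind SSVEPs is that each stimulus flickers at its own rate, so the processing resources devoted to each can be read out from the EEG amplitude at that rate. This can be sketched with a single-bin Fourier measurement (a hedged illustration: the 8.6 Hz and 12 Hz tags come from the study above, but the sampling rate, component amplitudes, and noise level are invented for the demo):

```python
import cmath
import math
import random

def tagged_amplitude(x, fs, f):
    """Amplitude of the frequency-tagged component at f Hz, estimated
    with a single-bin discrete Fourier transform of samples x at
    sampling rate fs."""
    n = len(x)
    coeff = sum(v * cmath.exp(-2j * math.pi * f * k / fs)
                for k, v in enumerate(x))
    return 2.0 * abs(coeff) / n

fs, duration = 500, 5.0  # Hz, seconds; both tags complete whole cycles in 5 s
rng = random.Random(1)
# Synthetic 'EEG': task stream tagged at 8.6 Hz (amplitude 1.0),
# distractor pictures tagged at 12 Hz (amplitude 0.5), plus noise.
eeg = [1.0 * math.sin(2 * math.pi * 8.6 * k / fs)
       + 0.5 * math.sin(2 * math.pi * 12.0 * k / fs)
       + rng.gauss(0.0, 1.0)
       for k in range(int(fs * duration))]

amp_task = tagged_amplitude(eeg, fs, 8.6)
amp_distractor = tagged_amplitude(eeg, fs, 12.0)
```

Each tag is recovered close to its true amplitude despite noise twice as strong as either component, because the noise spreads across all frequencies while each tagged response concentrates in one bin; shifts in the two amplitudes can then index the allocation of attention between task and distractor.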

  14. Color in the Cortex—single- and double-opponent cells

    PubMed Central

    Shapley, Robert; Hawken, Michael

    2011-01-01

    This is a review of the research during the past 25 years on cortical processing of color signals. At the beginning of the period the modular view of cortical processing predominated. However, at present an alternative view, that color and form are linked inextricably in visual cortical processing, is more persuasive than it seemed in 1985. Also, the role of the primary visual cortex, V1, in color processing now seems much larger than it did in 1985. The re-evaluation of the important role of V1 in color vision was caused in part by investigations of human V1 responses to color, measured with functional magnetic resonance imaging, fMRI, and in part by the results of numerous studies of single-unit neurophysiology in non-human primates. The neurophysiological results have highlighted the importance of double-opponent cells in V1. Another new concept is population coding of hue, saturation, and brightness in cortical neuronal population activity. PMID:21333672

  15. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man

    PubMed Central

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295

  16. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man.

    PubMed

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M; Van Opstal, A J

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
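The finding that audio-visual facilitation peaks at response synchrony is what a simple race (statistical-facilitation) account would predict: if each modality independently races to trigger the response, the redundant-target RT is the minimum of the two unimodal RTs, and that minimum gains most when the two finishing-time distributions overlap. A toy simulation of this idea (not the authors' model; the RT means, spreads, and SOAs below are hypothetical):

```python
import random

def mean_min_rt(soa, n=20000, seed=2):
    """Mean redundant-target RT under a race model: the response is
    triggered by whichever unimodal process finishes first. The
    auditory stimulus is delayed by `soa` ms relative to the visual
    one. Unimodal RTs are Gaussian with hypothetical parameters."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.gauss(250.0, 40.0)        # visual RT (ms)
        a = soa + rng.gauss(350.0, 50.0)  # auditory RT shifted by SOA
        total += min(v, a)
    return total / n

def facilitation(soa):
    """Speed-up of the redundant target relative to the mean of the
    faster unimodal response at this SOA."""
    fastest_unimodal_mean = min(250.0, 350.0 + soa)
    return fastest_unimodal_mean - mean_min_rt(soa)

# Facilitation peaks when an auditory lead of 100 ms aligns the two
# response distributions, i.e., at response rather than physical synchrony.
gains = {soa: facilitation(soa) for soa in (-300, -100, 200)}
```

With the two distributions aligned (SOA of -100 ms), statistical facilitation is large; when either modality finishes far ahead of the other, the minimum is dominated by one racer and the gain vanishes, mirroring point (2) of the abstract.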

  17. Differential Visual Processing of Animal Images, with and without Conscious Awareness

    PubMed Central

    Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David

    2016-01-01

The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask. PMID:27790106

  18. Differential Visual Processing of Animal Images, with and without Conscious Awareness.

    PubMed

    Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A; Melcher, David

    2016-01-01

The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that "invisible" stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the "unseen" condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask.

  19. The selectivity of responses to red-green colour and achromatic contrast in the human visual cortex: an fMRI adaptation study.

    PubMed

    Mullen, Kathy T; Chang, Dorita H F; Hess, Robert F

    2015-12-01

    There is controversy as to how responses to colour in the human brain are organized within the visual pathways. A key issue is whether there are modular pathways that respond selectively to colour or whether there are common neural substrates for both colour and achromatic (Ach) contrast. We used functional magnetic resonance imaging (fMRI) adaptation to investigate the responses of early and extrastriate visual areas to colour and Ach contrast. High-contrast red-green (RG) and Ach sinewave rings (0.5 cycles/degree, 2 Hz) were used as both adapting stimuli and test stimuli in a block design. We found robust adaptation to RG or Ach contrast in all visual areas. Cross-adaptation between RG and Ach contrast occurred in all areas indicating the presence of integrated, colour and Ach responses. Notably, we revealed contrasting trends for the two test stimuli. For the RG test, unselective processing (robust adaptation to both RG and Ach contrast) was most evident in the early visual areas (V1 and V2), but selective responses, revealed as greater adaptation between the same stimuli than cross-adaptation between different stimuli, emerged in the ventral cortex, in V4 and VO in particular. For the Ach test, unselective responses were again most evident in early visual areas but Ach selectivity emerged in the dorsal cortex (V3a and hMT+). Our findings support a strong presence of integrated mechanisms for colour and Ach contrast across the visual hierarchy, with a progression towards selective processing in extrastriate visual areas. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Integration of genomic and medical data into a 3D atlas of human anatomy.

    PubMed

    Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Dong, Xiaoli; Stromer, Julie N; Shu, Xueling; Wat, Stephen; Hallgrímsson, Benedikt; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W

    2008-01-01

We have developed a framework for the visual integration and exploration of multi-scale biomedical data, including anatomical and molecular components. We have also created a Java-based software system that integrates molecular information, such as gene expression data, into a three-dimensional digital atlas of the adult male human anatomy. Our atlas is structured according to the Terminologia Anatomica. The underlying data-indexing mechanism uses open standards and semantic ontology-processing tools to establish the associations between heterogeneous data types. The software system makes extensive use of virtual reality visualization.

  1. Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.

    PubMed

    Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi

    2013-01-01

The human brain has two streams for processing visual information: a dorsal stream and a ventral stream. The negative potential N170, or its magnetic counterpart M170, is known as the face-specific signal originating from the ventral stream. A visual image can be presented unconsciously by using continuous flash suppression (CFS), a visual masking technique based on binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors that are sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression persisted for about 1 s after the presentations ended. However, no significant difference was observed between tool and other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.

  2. Focal and Ambient Processing of Built Environments: Intellectual and Atmospheric Experiences of Architecture

    PubMed Central

    Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.

    2017-01-01

    Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867

  4. A survey of Applied Psychological Services' models of the human operator

    NASA Technical Reports Server (NTRS)

    Siegel, A. I.; Wolf, J. J.

    1979-01-01

A historical perspective is presented on the major features and status of two families of computer simulation models in which the human operator plays the primary role. Both task-oriented and message-oriented models are included. Two other recent efforts dealing with visual information processing are also summarized; rather than whole-model development, they involve a family of subroutines customized to add human aspects to existing models. A global diagram of the generalized model development/validation process is presented and related to 15 criteria for model evaluation.

  5. Short-Term Memory for Space and Time Flexibly Recruit Complementary Sensory-Biased Frontal Lobe Attention Networks.

    PubMed

    Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C

    2015-08-19

    The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Dissociation of neural mechanisms underlying orientation processing in humans

    PubMed Central

    Ling, Sam; Pearson, Joel; Blake, Randolph

    2009-01-01

    Summary Orientation selectivity is a fundamental, emergent property of neurons in early visual cortex, and discovery of that property [1, 2] dramatically shaped how we conceptualize visual processing [3–6]. However, much remains unknown about the neural substrates of these basic building blocks of perception, and what is known primarily stems from animal physiology studies. To probe the neural concomitants of orientation processing in humans, we employed repetitive transcranial magnetic stimulation (rTMS) to attenuate neural responses evoked by stimuli presented within a local region of the visual field. Previous physiological studies have shown that rTMS can significantly suppress the neuronal spiking activity, hemodynamic responses, and local field potentials within a focused cortical region [7, 8]. By suppressing neural activity with rTMS, we were able to dissociate components of the neural circuitry underlying two distinct aspects of orientation processing: selectivity and contextual effects. Orientation selectivity gauged by masking was unchanged by rTMS, whereas an otherwise robust orientation repulsion illusion was weakened following rTMS. This dissociation implies that orientation processing relies on distinct mechanisms, only one of which was impacted by rTMS. These results are consistent with models positing that orientation selectivity is largely governed by the patterns of convergence of thalamic afferents onto cortical neurons, with intracortical activity then shaping population responses contained within those orientation-selective cortical neurons. PMID:19682905

  7. Statistical modeling for visualization evaluation through data fusion.

    PubMed

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

There is high demand for data visualizations that provide insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference has been lacking, which leads to an inefficient visualization and user-interface design process. Recently, advances in interactive and sensing technologies have made electroencephalogram (EEG) signals, eye movements, and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative, online visualization evaluation. Fifteen participants joined a study based on three different visualization designs. The results provide a regularized regression model that can accurately predict a user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, as well as to the evaluation of other user-centered designs and to data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
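The data fusion step described here can be sketched as a regularized (ridge) regression that pools EEG, eye-movement, and interaction-log features to predict a task-complexity rating. This is a minimal illustration on simulated data with assumed feature layout, not the authors' actual model or dataset:

```python
import numpy as np

# Simulated stand-in for fused sensing features: each row is one trial,
# columns mix hypothetical EEG band power, fixation counts, and log events.
rng = np.random.default_rng(0)
n = 15 * 3  # e.g. 15 participants x 3 visualization designs (illustrative)
X = rng.normal(size=(n, 6))
true_w = np.array([1.5, -0.8, 0.0, 0.6, 0.0, 0.3])  # assumed ground truth
y = X @ true_w + rng.normal(scale=0.1, size=n)      # task-complexity rating

# Ridge regression: the L2 penalty stabilizes weights when the
# multi-modal features are correlated or the sample is small.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

pred = X @ y if False else X @ w  # predictions from the fitted model
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The penalty term `lam` trades a small bias for lower variance, which is the usual reason to regularize when fusing heterogeneous sensor streams.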

  8. Stereoscopic visual fatigue assessment and modeling

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Wang, Tingting; Gong, Yue

    2014-03-01

Evaluation of stereoscopic visual fatigue is one of the focuses of user experience research. It is measured by either subjective or objective methods. Objective measures are preferred for their ability to quantify the degree of human visual fatigue without being affected by individual variation. However, little research has been conducted on the integration of objective indicators, or on the sensitivity of each objective indicator in reflecting subjective fatigue. This paper proposes a simple, effective method to evaluate visual fatigue more objectively. The stereoscopic viewing process is divided into a series of sessions, after each of which viewers rate their visual fatigue with subjective scores (SS) on a five-grade scale, followed by tests of the punctum maximum accommodation (PMA) and visual reaction time (VRT). Throughout the entire viewing process, their eye movements are recorded by an infrared camera. The pupil size (PS) and percentage of eyelid closure over the pupil over time (PERCLOS) are extracted from the videos by the processing algorithm. Based on this method, an experiment with 14 subjects was conducted to assess visual fatigue induced by 3D images on a polarized 3D display. The experiment consisted of 10 sessions (5 min per session), each containing the same 75 images displayed randomly. The results show that PMA, VRT, and PERCLOS are the most efficient indicators of subjective visual fatigue, and a predictive model is finally derived through stepwise multiple regression.
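PERCLOS, one of the indicators extracted from the eye videos, is conventionally computed as the proportion of time the eyelid covers more than a threshold fraction of the pupil. A minimal sketch, with the threshold and the per-frame values as assumptions rather than the paper's parameters:

```python
def perclos(closure_fractions, threshold=0.8):
    """Fraction of frames in which eyelid closure exceeds the threshold.

    closure_fractions: per-frame eyelid-closure fraction in [0, 1],
    as estimated from the infrared eye video. The 0.8 threshold is a
    common convention (P80), assumed here, not taken from the paper.
    """
    closed = sum(1 for c in closure_fractions if c > threshold)
    return closed / len(closure_fractions)

# Ten illustrative frames: four exceed the closure threshold.
p = perclos([0.1, 0.95, 0.9, 0.2, 0.85, 0.3, 0.1, 0.05, 0.9, 0.2])
```

In practice the window would span seconds to minutes of video, and the resulting PERCLOS values would enter the regression alongside PMA and VRT.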

  9. Cortical network differences in the sighted versus early blind for recognition of human-produced action sounds

    PubMed Central

    Lewis, James W.; Frum, Chris; Brefczynski-Lewis, Julie A.; Talkington, William J.; Walker, Nathan A.; Rapuano, Kristina M.; Kovach, Amanda L.

    2012-01-01

    Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, while the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when attempting to recognize action sounds. PMID:21305666

  10. Human Connectome Project Informatics: quality control, database services, and data visualization

    PubMed Central

    Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.

    2013-01-01

    The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591

  11. The nature of visual self-recognition.

    PubMed

    Suddendorf, Thomas; Butler, David L

    2013-03-01

    Visual self-recognition is often controversially cited as an indicator of self-awareness and assessed with the mirror-mark test. Great apes and humans, unlike small apes and monkeys, have repeatedly passed mirror tests, suggesting that the underlying brain processes are homologous and evolved 14-18 million years ago. However, neuroscientific, developmental, and clinical dissociations show that the medium used for self-recognition (mirror vs photograph vs video) significantly alters behavioral and brain responses, likely due to perceptual differences among the different media and prior experience. On the basis of this evidence and evolutionary considerations, we argue that the visual self-recognition skills evident in humans and great apes are a byproduct of a general capacity to collate representations, and need not index other aspects of self-awareness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics

    DTIC Science & Technology

    1981-08-01

the capabilities/limitations of the human visual perceptual processing system and improve the training effectiveness of visual simulation systems... Myron Braunstein of the University of California at Irvine performed all the work in the perceptual area. Mr. Timothy A. Zimmerlin contributed the... work. Thus, while some areas are related, each is resolved independently in order to focus on the basic perceptual limitation. In addition, the

  13. 21 CFR 640.24 - Processing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Processing. 640.24 Section 640.24 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL... Platelets shall be colorless and transparent to permit visual inspection of the contents; any closure shall...

  14. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  15. Semantic extraction and processing of medical records for patient-oriented visual index

    NASA Astrophysics Data System (ADS)

    Zheng, Weilin; Dong, Wenjie; Chen, Xiangjiao; Zhang, Jianguo

    2012-02-01

To gain a comprehensive and complete understanding of a patient's healthcare status, doctors need to search for the patient's medical records across different healthcare information systems, such as PACS, RIS, HIS, and USIS, as a reference for diagnosis and treatment decisions. However, these procedures are time-consuming and tedious. To solve this kind of problem, we developed a patient-oriented visual index system (VIS) that uses visual technology to show health status and to retrieve a patient's examination information stored in each system through a 3D human model. In this presentation, we present a new approach for extracting semantic and characteristic information from medical record systems such as RIS/USIS to create the 3D visual index. The approach includes the following steps: (1) building a medical characteristic semantic knowledge base; (2) developing a natural language processing (NLP) engine to perform semantic analysis and logical judgment on text-based medical records; (3) applying the knowledge base and NLP engine to the medical records to extract medical characteristics (e.g., positive focus information), and then mapping the extracted information to the related organs/parts of the 3D human model to create the visual index. We tested the procedure on 559 radiological reports containing 853 foci and successfully extracted information for 828 of them, a focus-extraction success rate of approximately 97.1%.
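The keyword-to-organ mapping at the core of step (3) can be illustrated with a deliberately simplified dictionary lookup; the knowledge-base entries, report text, and region names below are hypothetical stand-ins for the paper's semantic engine, and the reported success rate is reproduced from the abstract's figures:

```python
# Hypothetical fragment of a medical characteristic knowledge base:
# keyword -> region of the 3D human model used as the visual index.
ORGAN_KEYWORDS = {
    "liver": "abdomen/liver",
    "lung": "thorax/lung",
    "kidney": "abdomen/kidney",
}

def extract_foci(report: str) -> list:
    """Return 3D-model regions whose keywords occur in the report text."""
    text = report.lower()
    return [region for kw, region in ORGAN_KEYWORDS.items() if kw in text]

# Illustrative report, not from the study's data set.
regions = extract_foci("Hypodense lesion in the right lobe of the liver; lungs clear.")

# The abstract's reported success rate: 828 of 853 foci extracted.
rate = round(828 / 853 * 100, 1)
```

A real engine would also need negation handling ("lungs clear" should not index a lung focus), which is exactly the kind of logical judgment the paper assigns to its NLP step.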

  17. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues, weighting them in proportion to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
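The "optimal strategy" referred to here is the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its reliability (inverse variance). A minimal sketch with illustrative numbers, not the study's stimuli or data:

```python
def integrate(estimates, variances):
    """Maximum-likelihood combination of independent Gaussian cues.

    Each cue's weight is its reliability (1/variance) normalized over
    all cues; the combined variance is the inverse of the summed
    reliabilities, so the fused estimate is never noisier than the
    best single cue.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    mean = sum(r * x for r, x in zip(reliabilities, estimates)) / total
    return mean, 1.0 / total

# Illustrative: a form cue signals identity strength 0.8 (variance 0.04),
# a motion cue signals 0.5 (variance 0.16); the fused estimate leans
# toward the more reliable form cue.
mean, var = integrate([0.8, 0.5], [0.04, 0.16])
```

In the study's terms, an observer is "near-optimal" if their identity choices shift with cue reliability in the way this weighting predicts.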

  18. Protocols for the Investigation of Information Processing in Human Assessment of Fundamental Movement Skills.

    PubMed

    Ward, Brodie J; Thornton, Ashleigh; Lay, Brendan; Rosenberg, Michael

    2017-01-01

    Fundamental movement skill (FMS) assessment remains an important tool in classifying individuals' level of FMS proficiency. The collection of FMS performances for assessment and monitoring has remained unchanged over the last few decades, but new motion capture technologies offer opportunities to automate this process. To achieve this, a greater understanding of the human process of movement skill assessment is required. The authors present the rationale and protocols of a project in which they aim to investigate the visual search patterns and information extraction employed by human assessors during FMS assessment, as well as the implementation of the Kinect system for FMS capture.

  19. Differences in the Visual Perception of Symmetric Patterns in Orangutans (Pongo pygmaeus abelii) and Two Human Cultural Groups: A Comparative Eye-Tracking Study

    PubMed Central

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2016-01-01

Symmetric structures are of importance in relation to aesthetic preference. To investigate whether the preference for symmetric patterns is unique to humans, independent of their cultural background, we compared two human populations with distinct cultural backgrounds (Namibian hunter-gatherers and German town dwellers) with one species of non-human great apes (orangutans) in their viewing behavior regarding symmetric and asymmetric patterns at two levels of complexity. In addition, the human participants were asked to give their aesthetic evaluation of a subset of the presented patterns. The results showed that humans of both cultural groups fixated on symmetric patterns for a longer period of time, regardless of the pattern's complexity. In contrast, orangutans did not clearly differentiate between symmetric and asymmetric patterns, but were much faster in processing the presented stimuli and scanned the complete screen, while both human groups rested on the symmetric pattern after a short scanning time. The aesthetic evaluation test revealed that the fixation preference for symmetric patterns did not match the aesthetic evaluation in the Hai//om group, whereas in the German group aesthetic evaluation was in accordance with the fixation preference in 60 percent of the cases. It can be concluded that humans prefer well-ordered structures in visual processing tasks, most likely because of a positive processing bias for symmetry, which orangutans did not show in this task, and that, in humans, an aesthetic preference does not necessarily accompany the fixation preference. PMID:27065184

  20. Neural theory for the perception of causal actions.

    PubMed

    Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A

    2012-07-01

    The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.

  1. Mental Images and the Modification of Learning Defects.

    ERIC Educational Resources Information Center

    Patten, Bernard M.

    Because human memory and thought involve extremely complex processes, it is possible to employ unusual modalities and specific visual strategies for remembering and problem-solving to assist patients with memory defects. This three-part paper discusses some of the research in the field of human memory and describes practical applications of these…

  2. Abnormal brain activation in neurofibromatosis type 1: a link between visual processing and the default mode network.

    PubMed

    Violante, Inês R; Ribeiro, Maria J; Cunha, Gil; Bernardino, Inês; Duarte, João V; Ramos, Fabiana; Saraiva, Jorge; Silva, Eduardo; Castelo-Branco, Miguel

    2012-01-01

    Neurofibromatosis type 1 (NF1) is one of the most common single gene disorders affecting the human nervous system with a high incidence of cognitive deficits, particularly visuospatial. Nevertheless, neurophysiological alterations in low-level visual processing that could be relevant to explain the cognitive phenotype are poorly understood. Here we used functional magnetic resonance imaging (fMRI) to study early cortical visual pathways in children and adults with NF1. We employed two distinct stimulus types differing in contrast and spatial and temporal frequencies to evoke relatively different activation of the magnocellular (M) and parvocellular (P) pathways. Hemodynamic responses were investigated in retinotopically-defined regions V1, V2 and V3 and then over the acquired cortical volume. Relative to matched control subjects, patients with NF1 showed deficient activation of the low-level visual cortex to both stimulus types. Importantly, this finding was observed for children and adults with NF1, indicating that low-level visual processing deficits do not ameliorate with age. Moreover, only during M-biased stimulation patients with NF1 failed to deactivate or even activated anterior and posterior midline regions of the default mode network. The observation that the magnocellular visual pathway is impaired in NF1 in early visual processing and is specifically associated with a deficient deactivation of the default mode network may provide a neural explanation for high-order cognitive deficits present in NF1, particularly visuospatial and attentional. A link between magnocellular and default mode network processing may generalize to neuropsychiatric disorders where such deficits have been separately identified.

  3. Domain specificity versus expertise: factors influencing distinct processing of faces.

    PubMed

    Carmel, David; Bentin, Shlomo

    2002-02-01

    To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car monitoring task and an animacy decision task. In Experiment 2, participants monitored human faces while faces of apes were the distracters. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories regardless of whether they were ignored or had an equal status with other categories (Experiment 1), or were the targets (in Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 that was similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific, but not species-specific, i.e. it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.

  4. Modulation of neuronal oscillatory activity in the beta- and gamma-band is associated with current individual anxiety levels.

    PubMed

    Schneider, Till R; Hipp, Joerg F; Domnick, Claudia; Carl, Christine; Büchel, Christian; Engel, Andreas K

    2018-05-26

    Human faces are among the most salient visual stimuli and act both as socially and emotionally relevant signals. Faces and especially faces with emotional expression receive prioritized processing in the human brain and activate a distributed network of brain areas reflected, e.g., in enhanced oscillatory neuronal activity. However, an inconsistent picture has emerged so far regarding neuronal oscillatory activity across different frequency-bands modulated by emotionally and socially relevant stimuli. The individual level of anxiety among healthy populations might be one explanation for these inconsistent findings. Therefore, we tested whether oscillatory neuronal activity is associated with individual anxiety levels during perception of faces with neutral and fearful facial expressions. We recorded neuronal activity using magnetoencephalography (MEG) in 27 healthy participants and determined their individual state anxiety levels. Images of human faces with neutral and fearful expressions, and physically matched visual control stimuli were presented while participants performed a simple color detection task. Spectral analyses revealed that face processing and in particular processing of fearful faces was characterized by enhanced neuronal activity in the theta- and gamma-band and decreased activity in the beta-band in early visual cortex and the fusiform gyrus (FFG). Moreover, the individuals' state anxiety levels correlated positively with the gamma-band response and negatively with the beta response in the FFG and the amygdala. Our results suggest that oscillatory neuronal activity plays an important role in affective face processing and is dependent on the individual level of state anxiety. Our work provides new insights into the role of oscillatory neuronal activity underlying processing of faces. Copyright © 2018. Published by Elsevier Inc.

  5. Foveated model observers to predict human performance in 3D images

    NASA Astrophysics Data System (ADS)

    Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.

    2017-03-01

    We evaluate whether predicting human observer performance in 3D search requires model observers that take into account peripheral human visual processing (foveated models). We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small and bright sphere), while the other one was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter model, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
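
    As a toy illustration of the non-foveated model-observer framework mentioned above, the sketch below implements a non-prewhitening (NPW) matched filter in white noise and estimates the detectability index d' of a small, bright signal versus a larger, dimmer Gaussian blob. The signal parameters, the white-noise background, and the helper names are our illustrative assumptions; the study itself used correlated backgrounds mimicking tomosynthesis noise and additional observer variants.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_signal(size=32, sigma=3.0, amplitude=1.0):
    # A Gaussian blob; a small/bright and a large/dim version stand in for
    # the microcalcification-like and mass-like signals.
    y, x = np.mgrid[:size, :size] - size // 2
    return amplitude * np.exp(-(x**2 + y**2) / (2 * sigma**2))

def npw_dprime(signal, noise_std=1.0, n_trials=2000):
    # Non-prewhitening matched filter: the template is the signal itself.
    # d' = (mean response present - mean response absent) / pooled std.
    t = signal.ravel()
    absent = np.array([t @ (noise_std * rng.standard_normal(t.size))
                       for _ in range(n_trials)])
    present = np.array([t @ (t + noise_std * rng.standard_normal(t.size))
                        for _ in range(n_trials)])
    pooled = np.sqrt(0.5 * (absent.var() + present.var()))
    return (present.mean() - absent.mean()) / pooled

small_bright = gaussian_signal(sigma=1.5, amplitude=2.0)  # calcification-like
large_dim = gaussian_signal(sigma=4.0, amplitude=0.5)     # mass-like
d_small = npw_dprime(small_bright)
d_large = npw_dprime(large_dim)
```

    In white noise the NPW observer's d' reduces to the signal energy over the noise level, which is why such models can misrank signals once correlated backgrounds and peripheral vision enter the picture.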

  6. Learning rational temporal eye movement strategies.

    PubMed

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
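
    The trade-off described above, event detection rate against the intrinsic cost of eye movements, can be sketched with a toy model. Here an observer checks a monitored location every `period` seconds, detects any event from the preceding `window` seconds, and pays a fixed cost per movement; all quantities are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rate = 0.5    # events per second at the monitored location (illustrative)
window = 1.0  # seconds an event remains detectable (illustrative)
cost = 0.05   # behavioural cost per eye movement (illustrative)

def net_reward(period):
    # Expected detections minus movement costs, per second, when checking
    # the location every `period` seconds.
    p_detect = min(window, period) / period  # fraction of events caught
    detections = rate * p_detect
    movements = 1.0 / period
    return detections - cost * movements

periods = np.linspace(0.2, 5.0, 200)
best_period = periods[np.argmax([net_reward(p) for p in periods])]
```

    Checking too often wastes costly movements; checking too rarely misses events, so the net reward peaks at an intermediate period, here at the detection window itself.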

  7. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Therefore, a better memory performance for visual compared to, e.g., auditory material is assumed. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously in two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in the auditory cortex was this competition further modulated by attention. Furthermore, neural activity reduction in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Evidence for unlimited capacity processing of simple features in visual cortex

    PubMed Central

    White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.

    2017-01-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964

  9. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  10. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  12. Can responses to basic non-numerical visual features explain neural numerosity responses?

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2017-04-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Visual representation of scientific information.

    PubMed

    Wong, Bang

    2011-02-15

    Great technological advances have enabled researchers to generate an enormous amount of data. Data analysis is replacing data generation as the rate-limiting step in scientific research. With this wealth of information, we have an opportunity to understand the molecular causes of human diseases. However, the unprecedented scale, resolution, and variety of data pose new analytical challenges. Visual representation of data offers insights that can lead to new understanding, whether the purpose is analysis or communication. This presentation shows how art, design, and traditional illustration can enable scientific discovery. Examples will be drawn from the Broad Institute's Data Visualization Initiative, aimed at establishing processes for creating informative visualization models.

  14. Natural Tendency towards Beauty in Humans: Evidence from Binocular Rivalry.

    PubMed

    Mo, Ce; Xia, Tiansheng; Qin, Kaixin; Mo, Lei

    2016-01-01

    Although human preference for beauty is common and compelling in daily life, it remains unknown whether such preference is essentially subserved by social cognitive demands or natural tendency towards beauty encoded in the human mind intrinsically. Here we demonstrate experimentally that humans automatically exhibit preference for visual and moral beauty without explicit cognitive efforts. Using a binocular rivalry paradigm, we identified enhanced gender-independent perceptual dominance for physically attractive persons, and the results suggested universal preference for visual beauty based on perceivable forms. Moreover, we also identified perceptual dominance enhancement for characters associated with virtuous descriptions after controlling for facial attractiveness and vigilance-related attention effects, which suggested a similar implicit preference for moral beauty conveyed in prosocial behaviours. Our findings show that behavioural preference for beauty is driven by an inherent natural tendency towards beauty in humans rather than explicit social cognitive processes.

  16. Stimulation-induced decreases in the diffusion of extra-vascular water in the human visual cortex: a window in time and space on mechanisms of brain water transport and economy.

    PubMed

    Baslow, Morris H; Hu, Caixia; Guilfoyle, David N

    2012-07-01

    In a human magnetic resonance diffusion-weighted imaging (DWI) investigation at 3 T and high diffusion sensitivity weighting (b = 1,800 s/mm²), which emphasizes the contribution of water in the extra-vascular compartment and minimizes that of the vascular compartment, we observed that visual stimulation with a flashing checkerboard at 8 Hz for a period of 600 s in eight subjects resulted in significant increases in DWI signals (mean +2.70%, range +0.51 to 8.54%). The increases in DWI signals in activated areas of the visual cortex indicated that during stimulation, the apparent diffusion coefficient (ADC) of extra-vascular compartment water decreased. In response to continuous stimulation, DWI signals gradually increased from pre-stimulation controls, leveling off after 400-500 s. During recovery from stimulation, DWI signals gradually decreased, approaching control levels in 300-400 s. In this study, we show for the first time that the effects of visual stimulation on DWI signals in the human visual cortex are cumulative over an extended period of time. We propose that these relatively slow stimulation-induced changes in the ADC of water in the extra-vascular compartment are due to transient changes in the ratio of faster diffusing free water to slower diffusing bound water and reflect brain water transport processes between the vascular and extra-vascular compartments at the cellular level. The nature of these processes including possible roles of the putative glucose water import and N-acetylaspartate water export molecular water pumps in brain function are discussed.
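
    The inference above rests on the standard monoexponential diffusion model, S = S0 · exp(-b · ADC), under which a signal increase at fixed b implies a lower apparent diffusion coefficient. A minimal sketch, using the study's b-value and mean signal change but an assumed, purely illustrative resting ADC:

```python
import math

def dwi_signal(s0, b, adc):
    # Monoexponential diffusion-weighted signal model: S = S0 * exp(-b * ADC).
    return s0 * math.exp(-b * adc)

def adc_from_signals(s, s0, b):
    # Invert the model to recover ADC from signals measured at b and at b = 0.
    return math.log(s0 / s) / b

b = 1800.0          # s/mm^2, as in the study
adc_rest = 0.8e-3   # mm^2/s, an assumed illustrative resting value
s_rest = dwi_signal(100.0, b, adc_rest)
s_active = s_rest * 1.027  # the ~+2.70% mean signal increase reported
adc_active = adc_from_signals(s_active, 100.0, b)
# A higher DWI signal at fixed b corresponds to a lower apparent ADC.
```

    Under this model the ADC change is exactly ln(1.027)/b, so even the modest percent signal changes reported translate into a measurable diffusion decrease.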

  17. Neural Representation of Motion-In-Depth in Area MT

    PubMed Central

    Sanada, Takahisa M.

    2014-01-01

    Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481

  18. Spatiotemporal dynamics underlying object completion in human ventral visual cortex.

    PubMed

    Tang, Hanlin; Buia, Calin; Madhavan, Radhika; Crone, Nathan E; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2014-08-06

    Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials of 113 visually selective electrodes from epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly the inferior occipital and fusiform gyri, remained selective despite showing only 9%-25% of the object areas. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Neuroesthetics and healthcare design.

    PubMed

    Nanda, Upali; Pati, Debajyoti; McCurry, Katie

    2009-01-01

    While there is a growing consciousness about the importance of visually pleasing environments in healthcare design, little is known about the key underlying mechanisms that enable aesthetics to play an instrumental role in the caregiving process. Hence it is often one of the first items to be value engineered. Aesthetics has (rightfully) been given preferential consideration in pleasure settings such as museums and recreational facilities; but in healthcare settings it is often considered expendable. Should it be? In this paper the authors share evidence that visual stimuli undergo an aesthetic evaluation process in the human brain by default, even when not prompted; that responses to visual stimuli may be immediate and emotional; and that aesthetics can be a source of pleasure, a fundamental perceptual reward that can help mitigate the stress of a healthcare environment. The authors also provide examples of studies that address the role of specific visual elements and visual principles in aesthetic evaluations and emotional responses. Finally, they discuss the implications of these findings for the design of art and architecture in healthcare.

  20. The effects of alphabet and expertise on letter perception

    PubMed Central

    Wiley, Robert W.; Wilson, Colin; Rapp, Brenda

    2016-01-01

    Long-standing questions in human perception concern the nature of the visual features that underlie letter recognition and the extent to which the visual processing of letters is affected by differences in alphabets and levels of viewer expertise. We examined these issues in a novel approach using a same-different judgment task on pairs of letters from the Arabic alphabet with two participant groups—one with no prior exposure to Arabic and one with reading proficiency. Hierarchical clustering and linear mixed-effects modeling of reaction times and accuracy provide evidence that both the specific characteristics of the alphabet and observers’ previous experience with it affect how letters are perceived and visually processed. The findings of this research further our understanding of the multiple factors that affect letter perception and support the view of a visual system that dynamically adjusts its weighting of visual features as expert readers come to more efficiently and effectively discriminate the letters of the specific alphabet they are viewing. PMID:26913778
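
    Hierarchical clustering of letter dissimilarities, as used in the study, can be sketched with a naive single-linkage implementation on a toy matrix. The letters, the dissimilarity values (e.g. derived from same-different reaction times, where slower "different" responses mean more similar letters), and the single-linkage choice are our illustrative assumptions, not the study's data or exact method.

```python
import numpy as np

# Hypothetical dissimilarities among four letters; values are illustrative.
letters = ["A", "B", "P", "R"]
D = np.array([
    [0.0, 0.9, 0.8, 0.7],
    [0.9, 0.0, 0.2, 0.3],
    [0.8, 0.2, 0.0, 0.25],
    [0.7, 0.3, 0.25, 0.0],
])

def single_linkage(D, n_clusters):
    # Naive agglomerative clustering: repeatedly merge the two clusters
    # whose closest members are nearest, until n_clusters remain.
    clusters = [{i} for i in range(len(D))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(D[a][b] for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] |= clusters.pop(j)
    return clusters

groups = single_linkage(D, 2)
# Expect the visually similar letters B, P, R to merge before joining A.
```

    Clusters recovered this way expose which visual features observers weight, which is how differences between novice and expert readers become visible in the dendrograms.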

  1. An Analysis of Machine- and Human-Analytics in Classification.

    PubMed

    Tam, Gary K L; Kothari, Vivek; Chen, Min

    2017-01-01

    In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.

  2. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or to an interaction between the processing of task-relevant and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Episodic Memory Retrieval Functionally Relies on Very Rapid Reactivation of Sensory Information.

    PubMed

    Waldhauser, Gerd T; Braun, Verena; Hanslmayr, Simon

    2016-01-06

    Episodic memory retrieval is assumed to rely on the rapid reactivation of sensory information that was present during encoding, a process termed "ecphory." We investigated the functional relevance of this scarcely understood process in two experiments in human participants. We presented stimuli to the left or right of fixation at encoding, followed by an episodic memory test with centrally presented retrieval cues. This allowed us to track the reactivation of lateralized sensory memory traces during retrieval. Successful episodic retrieval led to a very early (∼100-200 ms) reactivation of lateralized alpha/beta (10-25 Hz) electroencephalographic (EEG) power decreases in the visual cortex contralateral to the visual field at encoding. Applying rhythmic transcranial magnetic stimulation to interfere with early retrieval processing in the visual cortex led to decreased episodic memory performance specifically for items encoded in the visual field contralateral to the site of stimulation. These results demonstrate, for the first time, that episodic memory functionally relies on very rapid reactivation of sensory information. Remembering personal experiences requires a "mental time travel" to revisit sensory information perceived in the past. This process is typically described as a controlled, relatively slow process. However, by using electroencephalography to measure neural activity with a high time resolution, we show that such episodic retrieval entails a very rapid reactivation of sensory brain areas. Using transcranial magnetic stimulation to alter brain function during retrieval revealed that this early sensory reactivation is causally relevant for conscious remembering. These results give first neural evidence for a functional, preconscious component of episodic remembering. This provides new insight into the nature of human memory and may help in the understanding of psychiatric conditions that involve the automatic intrusion of unwanted memories. 

  4. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Our aim was to test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Structural Magnetic Resonance Imaging (MRI) techniques were employed to measure eye and orbit (n = 88) and brain and visual cortex (n = 99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes and (iii) different visual cortical areas, independently of overall brain volume. In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices.

  5. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed Central

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    Background: In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Aim: To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Subjects & Methods: Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (N=88), and brain and visual cortex (N=99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. Results: A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes, (iii) different visual cortical areas, independently of overall brain volume. Conclusion: In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices. PMID:23879766

  6. Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.

    PubMed

    Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu

    2017-01-01

    Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts by extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background, which differed from the distractors (white circles) in color (e.g., a red circle target), local features (e.g., a white square target), a global feature (e.g., a white ring with a hole as the target) or their combinations (e.g., a red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., a white ring with a squared hole). These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than in local features. They also provide behavioral constraints for identifying the underlying neural substrates.

  7. Overt attention toward oriented objects in free-viewing barn owls.

    PubMed

    Harmening, Wolf Maximilian; Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2011-05-17

    Visual saliency based on orientation contrast is a perceptual product attributed to the functional organization of the mammalian brain. We examined this visual phenomenon in barn owls by mounting a wireless video microcamera on the owls' heads and confronting them with visual scenes that contained one differently oriented target among similarly oriented distracters. Without being confined by any particular task, the owls looked significantly longer, more often, and earlier at the target, thus exhibiting visual search strategies so far demonstrated in similar conditions only in primates. Given the considerable differences in phylogeny and the structure of visual pathways between owls and humans, these findings suggest that orientation saliency has computational optimality in a wide variety of ecological contexts, and thus constitutes a universal building block for efficient visual information processing in general.

  8. Are Categorical Spatial Relations Encoded by Shifting Visual Attention between Objects?

    ERIC Educational Resources Information Center

    Yuan, Lei; Uttal, David; Franconeri, Steven

    2016-01-01

    Perceiving not just values, but relations between values, is critical to human cognition. We tested the predictions of a proposed mechanism for processing categorical spatial relations between two objects--the "shift account" of relation processing--which states that relations such as "above" or "below" are extracted…

  9. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

    We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion.

  10. Simultaneous chromatic and luminance human electroretinogram responses

    PubMed Central

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-01-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats’ ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211
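
    As an illustrative aside, the stimulus design described in this record (chromatic modulation at one temporal frequency, luminance modulation at twice that frequency, so the two components separate into first- and second-harmonic responses) can be sketched numerically. This is only a minimal sketch under assumed parameters; the function name, frequencies, and contrast values are hypothetical and not taken from the paper.

```python
import numpy as np

def compound_stimulus(t, f_chrom=4.0, contrast=0.5, mean=0.5):
    """Sketch of a chromatic-achromatic compound stimulus: a red-green
    chromatic component at f_chrom and a luminance component at twice
    that frequency, mixed onto red and green gun signals."""
    chrom = contrast * np.sin(2 * np.pi * f_chrom * t)       # R-G opponent component at f
    lum = contrast * np.sin(2 * np.pi * 2 * f_chrom * t)     # luminance component at 2f
    red = mean * (1 + 0.5 * (lum + chrom))    # guns move together for luminance,
    green = mean * (1 + 0.5 * (lum - chrom))  # in counterphase for chromatic modulation
    return red, green

t = np.linspace(0, 1, 1000, endpoint=False)  # one second of stimulus
r, g = compound_stimulus(t)
```

With this construction the gun difference (r - g) contains only the chromatic frequency and the gun sum (r + g) only the doubled luminance frequency, which is the property that lets the first and second ERG harmonics be attributed to the chromatic and luminance pathways respectively.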

  11. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  12. Effect of tone mapping operators on visual attention deployment

    NASA Astrophysics Data System (ADS)

    Narwaria, Manish; Perreira Da Silva, Matthieu; Le Callet, Patrick; Pepion, Romuald

    2012-10-01

    High Dynamic Range (HDR) images/videos require the use of a tone mapping operator (TMO) when visualized on Low Dynamic Range (LDR) displays. From an artistic-intention point of view, TMOs are not necessarily transparent and might change how viewers look at the content. In this paper, we investigate and quantify how TMOs modify visual attention (VA). To that end, both objective and subjective tests, in the form of eye-tracking experiments, were conducted on several still images processed by 11 different TMOs. Our studies confirm that TMOs can indeed modify human attention and fixation behavior significantly, and therefore suggest that VA needs to be considered when evaluating the overall perceptual impact of TMOs on HDR content. Because existing studies have so far considered only quality or aesthetic appeal, this study brings a new perspective on the importance of VA in HDR content processing for visualization on LDR displays.
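
    For readers unfamiliar with tone mapping, a small sketch of a global TMO may help fix ideas. The 11 operators evaluated in this record are not specified here; the example below is a generic Reinhard-style global operator (log-average scaling followed by L/(1+L) compression), offered purely as an illustration of the kind of transform whose effect on attention the study measures.

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    """Minimal global tone mapping sketch: estimate the scene 'key' via the
    log-average luminance, scale the image to mid-grey, then compress the
    result into [0, 1) with the sigmoid L / (1 + L)."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # log-average luminance
    scaled = key * luminance / log_avg                  # map scene key to mid-grey
    return scaled / (1.0 + scaled)                      # compress highlights

# Synthetic HDR luminance with a heavy-tailed (lognormal) distribution.
hdr = np.random.default_rng(0).lognormal(mean=0.0, sigma=2.0, size=(64, 64))
ldr = reinhard_global(hdr)
```

Because the mapping is monotonic, pixel ordering is preserved while the dynamic range is compressed; local TMOs relax exactly this property, which is one reason different operators can redirect fixations differently.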

  13. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  14. Direct evidence for attention-dependent influences of the frontal eye-fields on feature-responsive visual cortex.

    PubMed

    Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon

    2014-11-01

    Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signal increases occurred in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in the face-responsive fusiform area (FFA) when faces were attended. These TMS effects on the BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the role of the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property.

  15. Piglets Learn to Use Combined Human-Given Visual and Auditory Signals to Find a Hidden Reward in an Object Choice Task

    PubMed Central

    Bensoussan, Sandy; Cornil, Maude; Meunier-Salaün, Marie-Christine; Tallet, Céline

    2016-01-01

    Although animals rarely use only one sense to communicate, few studies have investigated the use of combinations of different signals between animals and humans. This study assessed for the first time the spontaneous reactions of piglets to human pointing gestures and voice in an object-choice task with a reward. Piglets (Sus scrofa domestica) mainly use auditory signals–individually or in combination with other signals—to communicate with their conspecifics. Their wide hearing range (42 Hz to 40.5 kHz) fits the range of human vocalisations (40 Hz to 1.5 kHz), which may induce sensitivity to the human voice. However, only their ability to use visual signals from humans, especially pointing gestures, has been assessed to date. The current study investigated the effects of signal type (visual, auditory and combined visual and auditory) and piglet experience on the piglets’ ability to locate a hidden food reward over successive tests. Piglets did not find the hidden reward at first presentation, regardless of the signal type given. However, they subsequently learned to use a combination of auditory and visual signals (human voice and static or dynamic pointing gestures) to successfully locate the reward in later tests. This learning process may result either from repeated presentations of the combination of static gestures and auditory signals over successive tests, or from transitioning from static to dynamic pointing gestures, again over successive tests. Furthermore, piglets increased their chance of locating the reward either if they did not go straight to a bowl after entering the test area or if they stared at the experimenter before visiting it. Piglets were not able to use the voice direction alone, indicating that a combination of signals (pointing and voice direction) is necessary. Improving our communication with animals requires adapting to their individual sensitivity to human-given signals. PMID:27792731

  16. Piglets Learn to Use Combined Human-Given Visual and Auditory Signals to Find a Hidden Reward in an Object Choice Task.

    PubMed

    Bensoussan, Sandy; Cornil, Maude; Meunier-Salaün, Marie-Christine; Tallet, Céline

    2016-01-01

    Although animals rarely use only one sense to communicate, few studies have investigated the use of combinations of different signals between animals and humans. This study assessed for the first time the spontaneous reactions of piglets to human pointing gestures and voice in an object-choice task with a reward. Piglets (Sus scrofa domestica) mainly use auditory signals-individually or in combination with other signals-to communicate with their conspecifics. Their wide hearing range (42 Hz to 40.5 kHz) fits the range of human vocalisations (40 Hz to 1.5 kHz), which may induce sensitivity to the human voice. However, only their ability to use visual signals from humans, especially pointing gestures, has been assessed to date. The current study investigated the effects of signal type (visual, auditory and combined visual and auditory) and piglet experience on the piglets' ability to locate a hidden food reward over successive tests. Piglets did not find the hidden reward at first presentation, regardless of the signal type given. However, they subsequently learned to use a combination of auditory and visual signals (human voice and static or dynamic pointing gestures) to successfully locate the reward in later tests. This learning process may result either from repeated presentations of the combination of static gestures and auditory signals over successive tests, or from transitioning from static to dynamic pointing gestures, again over successive tests. Furthermore, piglets increased their chance of locating the reward either if they did not go straight to a bowl after entering the test area or if they stared at the experimenter before visiting it. Piglets were not able to use the voice direction alone, indicating that a combination of signals (pointing and voice direction) is necessary. Improving our communication with animals requires adapting to their individual sensitivity to human-given signals.

  17. Connectopathy in Autism Spectrum Disorders: A Review of Evidence from Visual Evoked Potentials and Diffusion Magnetic Resonance Imaging

    PubMed Central

    Yamasaki, Takao; Maekawa, Toshihiko; Fujita, Takako; Tobimatsu, Shozo

    2017-01-01

    Individuals with autism spectrum disorder (ASD) show superior performance in processing fine details; however, they often exhibit impairments of gestalt face, global motion perception, and visual attention as well as core social deficits. Increasing evidence has suggested that social deficits in ASD arise from abnormal functional and structural connectivities between and within distributed cortical networks that are recruited during social information processing. Because the human visual system is characterized by a set of parallel, hierarchical, multistage network systems, we hypothesized that the altered connectivity of visual networks contributes to social cognition impairment in ASD. In the present review, we focused on studies of altered connectivity of visual and attention networks in ASD using visual evoked potentials (VEPs), event-related potentials (ERPs), and diffusion tensor imaging (DTI). A series of VEP, ERP, and DTI studies conducted in our laboratory have demonstrated complex alterations (impairment and enhancement) of visual and attention networks in ASD. Recent data have suggested that the atypical visual perception observed in ASD is caused by altered connectivity within parallel visual pathways and attention networks, thereby contributing to the impaired social communication observed in ASD. Therefore, we conclude that the underlying pathophysiological mechanism of ASD constitutes a “connectopathy.” PMID:29170625

  18. CAVEman: Standardized anatomical context for biomedical data mapping.

    PubMed

    Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Wat, Stephen; Hallgrímsson, Benedikt; Dong, Xiaoli; Shu, Xueling; Stromer, Julie N; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W

    2008-01-01

    The authors have created a software system called the CAVEman, for the visual integration and exploration of heterogeneous anatomical and biomedical data. The CAVEman can be applied for both education and research tasks. The main component of the system is a three-dimensional digital atlas of the adult male human anatomy, structured according to the nomenclature of Terminologia Anatomica. The underlying data-indexing mechanism uses standard ontologies to map a range of biomedical data types onto the atlas. The CAVEman system is now used to visualize genetic processes in the context of the human anatomy and to facilitate visual exploration of the data. Through the use of Java™ software, the atlas-based system is portable to virtually any computer environment, including personal computers and workstations. Existing Java tools for biomedical data analysis have been incorporated into the system. The affordability of virtual-reality installations has increased dramatically over the last several years. This creates new opportunities for educational scenarios that model important processes in a patient's body, including gene expression patterns, metabolic activity, the effects of interventions such as drug treatments, and eventually surgical simulations.

  19. Research on metallic material defect detection based on bionic sensing of human visual properties

    NASA Astrophysics Data System (ADS)

    Zhang, Pei Jiang; Cheng, Tao

    2018-05-01

    Because the human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them, this paper proposes a bionic-sensing visual inspection model for detecting defects in metallic materials in the mechanical field, built by simulating the imaging characteristics of human vision on the basis of the human visual attention mechanism. First, biologically salient low-level visual features are extracted, and expert defect annotations are used as intermediate features of simulated visual perception. An SVM is then trained on high-level features of metal-material visual defects. Finally, weighting the contribution of each component yields a bionic detection model for metal-material defects that simulates human visual characteristics.
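
    The pipeline this record describes (low-level saliency-style features feeding an SVM classifier) can be sketched in a few lines. The feature set below (mean intensity, local variance, gradient energy) and the synthetic smooth-vs.-defective patches are hypothetical stand-ins, since the paper's exact features and weighting scheme are not given; only the train-and-classify stage is illustrated.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def patch_features(patch):
    """Low-level features for one image patch: mean intensity, local
    variance (std), and gradient energy as a crude saliency cue."""
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(), np.abs(gx).mean() + np.abs(gy).mean()]

# Synthetic training data: smooth "normal" surface patches vs. noisy
# "defective" ones (purely illustrative, not the paper's dataset).
rng = np.random.default_rng(0)
normal = [rng.normal(0.5, 0.02, (8, 8)) for _ in range(50)]
defect = [rng.normal(0.5, 0.2, (8, 8)) for _ in range(50)]
X = np.array([patch_features(p) for p in normal + defect])
y = np.array([0] * 50 + [1] * 50)  # 0 = normal, 1 = defect

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```

Scoring a new patch is then `clf.predict([patch_features(patch)])`; the attention-weighting stage the paper adds on top would reweight such features before classification.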

  20. Physics and psychophysics of color reproduction

    NASA Astrophysics Data System (ADS)

    Giorgianni, Edward J.

    1991-08-01

    The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.

  1. Dystrophin Is Required for Proper Functioning of Luminance and Red-Green Cone Opponent Mechanisms in the Human Retina.

    PubMed

    Barboni, Mirella Telles Salgueiro; Martins, Cristiane Maria Gomes; Nagy, Balázs Vince; Tsai, Tina; Damico, Francisco Max; da Costa, Marcelo Fernandes; Pavanello, Rita de Cassia M; Lourenço, Naila Cristina Vilaça; de Cerqueira, Antonia Maria Pereira; Zatz, Mayana; Kremers, Jan; Ventura, Dora Fix

    2016-07-01

    Visual information is processed in parallel pathways in the visual system. Parallel processing begins at the synapse between the photoreceptors and their postreceptoral neurons in the human retina. The integrity of this first neural connection is vital for normal visual processing downstream. Of the numerous elements necessary for proper functioning of this synaptic contact, dystrophin proteins in the eye play an important role. Deficiency of muscle dystrophin causes Duchenne muscular dystrophy (DMD), an X-linked disease that affects muscle function and leads to decreased life expectancy. In DMD patients, postreceptoral retinal mechanisms underlying scotopic and photopic vision and ON- and OFF-pathway responses are also altered. In this study, we recorded the electroretinogram (ERG) while preferentially activating the (red-green) opponent or the luminance pathway, and compared data from healthy participants (n = 16) with those of DMD patients (n = 10). The stimuli were heterochromatic sinusoidal modulations at a mean luminance of 200 cd/m2. The recordings allowed us also to analyze ON and OFF cone-driven retinal responses. We found significant differences in 12-Hz response amplitudes and phases between controls and DMD patients, with conditions with large luminance content resulting in larger response amplitudes in DMD patients compared to controls, whereas responses of DMD patients were smaller when pure chromatic modulation was given. The results suggest that dystrophin is required for the proper function of luminance and red-green cone opponent mechanisms in the human retina.

  2. Bioelectronic nose and its application to smell visualization.

    PubMed

    Ko, Hwi Jin; Park, Tai Hyun

    2016-01-01

There have been many attempts to visualize smell using various techniques, because information obtained from the human sense of smell is highly subjective and an objective representation is needed. So far, well-trained experts such as perfumers, complex and large-scale equipment such as GC-MS, and electronic noses have played the major roles in objectively detecting and recognizing odors. Recently, an optoelectronic nose was developed for this purpose, but limitations remain regarding sensitivity and the number of smells that can be visualized. Since the elucidation of the olfactory mechanism, much research has been devoted to developing sensing devices that mimic the human olfactory system. Engineered olfactory cells were constructed to mimic the human olfactory system, and their use for smell visualization has been attempted with methods such as calcium imaging, CRE reporter assays, BRET, and membrane-potential assays; however, it is difficult to control the condition of the cells consistently, and low odorant concentrations cannot be detected. More recently, the bioelectronic nose was developed, and it has improved considerably along with advances in nano-biotechnology. The bioelectronic nose consists of two parts: a primary transducer and a secondary transducer. Biological materials as the primary transducer improve the selectivity of the sensor, and nanomaterials as the secondary transducer increase its sensitivity. In particular, bioelectronic noses that combine nanomaterials with human olfactory receptors, or with nanovesicles derived from engineered olfactory cells, can potentially detect almost all smells recognizable by humans, because an engineered olfactory cell can express any human olfactory receptor and thereby mimic the human olfactory system.
Therefore, the bioelectronic nose will be a potent tool for smell visualization, provided two technologies are completed. First, a multi-channel array-sensing system must integrate all of the olfactory receptors into a single chip to mimic the performance of the human nose. Second, a processing technique for the multi-channel signals must be established, together with the conversion of those signals into visual images. With this latest sensing technology, a practical smell-visualization technology is expected in the near future.
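The second requirement, converting multi-channel signals into visual images, can be sketched in a deliberately simple, hypothetical form: normalize an N-channel receptor-response vector and collapse it into a single RGB colour. The function name and the three-group channel mapping below are illustrative assumptions, not part of any published system.

```python
import numpy as np

def responses_to_color(resp):
    """Map an N-channel receptor-response vector to one RGB colour (illustrative)."""
    resp = np.asarray(resp, dtype=float)
    norm = (resp - resp.min()) / (np.ptp(resp) + 1e-12)  # scale responses to [0, 1]
    # Split the channels into three groups and average each group into R, G, B.
    r, g, b = (float(grp.mean()) for grp in np.array_split(norm, 3))
    return (r, g, b)

print(responses_to_color([0.0, 1.0, 0.0, 1.0, 0.0, 1.0]))
```

A real array sensor would feed hundreds of receptor channels into such a mapping, and the colour space itself would have to be learned from human odor judgements rather than fixed by channel order.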

  3. Interactions of Top-Down and Bottom-Up Mechanisms in Human Visual Cortex

    PubMed Central

    McMains, Stephanie; Kastner, Sabine

    2011-01-01

    Multiple stimuli present in the visual field at the same time compete for neural representation by mutually suppressing their evoked activity throughout visual cortex, providing a neural correlate for the limited processing capacity of the visual system. Competitive interactions among stimuli can be counteracted by top-down, goal-directed mechanisms such as attention, and by bottom-up, stimulus-driven mechanisms. Because these two processes cooperate in everyday life to bias processing toward behaviorally relevant or particularly salient stimuli, it has proven difficult to study interactions between top-down and bottom-up mechanisms. Here, we used an experimental paradigm in which we first isolated the effects of a bottom-up influence on neural competition by parametrically varying the degree of perceptual grouping in displays that were not attended. Second, we probed the effects of directed attention on the competitive interactions induced with the parametric design. We found that the amount of attentional modulation varied linearly with the degree of competition left unresolved by bottom-up processes, such that attentional modulation was greatest when neural competition was little influenced by bottom-up mechanisms and smallest when competition was strongly influenced by bottom-up mechanisms. These findings suggest that the strength of attentional modulation in the visual system is constrained by the degree to which competitive interactions have been resolved by bottom-up processes related to the segmentation of scenes into candidate objects. PMID:21228167

  4. Evaluation of the traffic parameters in a metropolitan area by fusing visual perceptions and CNN processing of webcam images.

    PubMed

    Faro, Alberto; Giordano, Daniela; Spampinato, Concetto

    2008-06-01

This paper proposes a traffic monitoring architecture based on a high-speed communication network whose nodes are equipped with fuzzy processors and cellular neural network (CNN) embedded systems. It implements a real-time mobility information system in which visual perceptions of traffic reported by people in the field and video sequences taken from webcams are jointly processed to evaluate the fundamental traffic parameters for every street of a metropolitan area. The paper presents the whole methodology for data collection and analysis, and compares the accuracy and processing time of the proposed soft-computing techniques with those of existing algorithms. Moreover, it discusses when and why it is advisable to fuse visual perceptions of the traffic with the automated measurements taken from the webcams to compute the maximum traveling time likely needed to reach any destination in the traffic network.

  5. Functionally defined white matter reveals segregated pathways in human ventral temporal cortex associated with category-specific processing

    PubMed Central

    Gomez, Jesse; Pestilli, Franco; Witthoft, Nathan; Golarai, Golijeh; Liberman, Alina; Poltoratski, Sonia; Yoon, Jennifer; Grill-Spector, Kalanit

    2014-01-01

    Summary It is unknown if the white matter properties associated with specific visual networks selectively affect category-specific processing. In a novel protocol we combined measurements of white matter structure, functional selectivity, and behavior in the same subjects. We find two parallel white matter pathways along the ventral temporal lobe connecting to either face-selective or place-selective regions. Diffusion properties of portions of these tracts adjacent to face- and place-selective regions of ventral temporal cortex correlate with behavioral performance for face or place processing, respectively. Strikingly, adults with developmental prosopagnosia (face blindness) express an atypical structure-behavior relationship near face-selective cortex, suggesting that white matter atypicalities in this region may have behavioral consequences. These data suggest that examining the interplay between cortical function, anatomical connectivity, and visual behavior is integral to understanding functional networks and their role in producing visual abilities and deficits. PMID:25569351

  6. Visual exploration and analysis of human-robot interaction rules

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. 
As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.

  7. Neuroplasticity and amblyopia: vision at the balance point.

    PubMed

    Tailor, Vijay K; Schwarzkopf, D Samuel; Dahlmann-Noor, Annegret H

    2017-02-01

New insights into triggers and brakes of plasticity in the visual system are being translated into new treatment approaches which may improve outcomes not only in children, but also in adults. Visual experience-driven plasticity is greatest in early childhood, triggered by maturation of inhibitory interneurons which facilitate strengthening of synchronous synaptic connections, and inactivation of others. Normal binocular development leads to progressive refinement of monocular visual acuity, stereoacuity and fusion of images from both eyes. At the end of the 'critical period', structural and functional brakes such as dampening of acetylcholine receptor signalling and formation of perineuronal nets limit further synaptic remodelling. Imbalanced visual input from the two eyes can lead to imbalanced neural processing and permanent visual deficits, the commonest of which is amblyopia. The efficacy of new behavioural, physical and pharmacological interventions aiming to balance visual input and visual processing has been described in humans, and some are currently under evaluation in randomised controlled trials. Outcomes may change amblyopia treatment for children and adults, but the safety of new approaches will need careful monitoring, as permanent adverse events may occur when plasticity is re-induced after the end of the critical period. Video abstract: http://links.lww.com/CONR/A42

  8. Toward Model Building for Visual Aesthetic Perception

    PubMed Central

    Lughofer, Edwin; Zeng, Xianyi

    2017-01-01

Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, quartet, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194

  9. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of V1 neurons underlying motion perception in the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper that comprises a motion-perception quality index and a spatial quality index. Specifically, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated by a difference-of-Gaussian filter bank, yielding the motion-perception quality index, while a gradient similarity measure evaluates the spatial distortion of the video sequence to give the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random-forests regression model trained on the generated quality indices correlates highly with human visual perception and offers significant improvements over comparable well-performing methods. The proposed method shows higher consistency with subjective perception and higher generalization capability.
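The gradient-similarity measure behind the spatial index can be sketched as follows; the Sobel gradients, the stabilizing constant `c`, and plain mean pooling are illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy import ndimage

def gradient_similarity(ref, dist, c=0.01):
    """Mean gradient-magnitude similarity between a reference and a distorted frame."""
    g_ref = np.hypot(ndimage.sobel(ref, axis=0), ndimage.sobel(ref, axis=1))
    g_dist = np.hypot(ndimage.sobel(dist, axis=0), ndimage.sobel(dist, axis=1))
    sim = (2.0 * g_ref * g_dist + c) / (g_ref**2 + g_dist**2 + c)  # 1.0 where gradients agree
    return float(sim.mean())

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
print(gradient_similarity(frame, frame))                              # identical frames score 1.0
print(gradient_similarity(frame, ndimage.gaussian_filter(frame, 2)))  # blurring lowers the score
```

Per-pixel similarity is bounded by 1 and reaches it only where the two gradient maps match, so any spatial distortion that alters edges pulls the pooled score below 1.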

  10. Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans.

    PubMed

    Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène

    2002-10-01

Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of inputs or the nature (speech/non-speech) of the information to be combined. Yet, the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.

  11. Human visual perceptual organization beats thinking on speed.

    PubMed

    van der Helm, Peter A

    2017-05-01

    What is the degree to which knowledge influences visual perceptual processes? This question, which is central to the seeing-versus-thinking debate in cognitive science, is often discussed using examples claimed to be proof of one stance or another. It has, however, also been muddled by the usage of different and unclear definitions of perception. Here, for the well-defined process of perceptual organization, I argue that including speed (or efficiency) into the equation opens a new perspective on the limits of top-down influences of thinking on seeing. While the input of the perceptual organization process may be modifiable and its output enrichable, the process itself seems so fast (or efficient) that thinking hardly has time to intrude and is effective mostly after the fact.

  12. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

The aim of digital image fusion is to combine the important visual content from multiple source images to improve the visual quality of the result; the fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select the appropriate information from the source images and obtain a fused image. The process involves two main steps. First, the DWT is applied to the registered source images. Second, the qualitative sub-bands are identified using HVS weights. The qualitative sub-bands selected from the different sources then form a high-quality HVS-based fused image, whose quality is evaluated with standard fusion metrics. The results show its superiority among state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Sub-Sampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
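The maximum-selection fusion rule can be illustrated with a minimal sketch: a one-level 2-D Haar DWT written directly in NumPy, averaging of the approximation band, and larger-magnitude selection on the detail sub-bands. Averaging the approximation band is a common convention assumed here; the paper's HVS weighting of sub-bands is not reproduced.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: approximation plus horizontal/vertical/diagonal details."""
    p, q = img[0::2, 0::2], img[0::2, 1::2]
    r, s = img[1::2, 0::2], img[1::2, 1::2]
    return (p + q + r + s) / 4, (p + q - r - s) / 4, (p - q + r - s) / 4, (p - q - r + s) / 4

def haar_idwt2(a, h, v, d):
    """Exact inverse of haar_dwt2."""
    img = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    img[0::2, 0::2] = a + h + v + d
    img[0::2, 1::2] = a + h - v - d
    img[1::2, 0::2] = a - h + v - d
    img[1::2, 1::2] = a - h - v + d
    return img

def fuse(img1, img2):
    """Average the approximation bands; keep the larger-magnitude detail coefficients."""
    a1, *d1 = haar_dwt2(img1)
    a2, *d2 = haar_dwt2(img2)
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2)]
    return haar_idwt2((a1 + a2) / 2, *details)
```

Selecting the larger-magnitude detail coefficient keeps the sharper edge from either source at each location, which is why the fused image can look crisper than both inputs.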

  13. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of "Green" Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex.

    PubMed

    Rauscher, Franziska G; Plant, Gordon T; James-Galton, Merle; Barbur, John L

    2011-01-01

Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale de l'Eclairage (CIE) x,y chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show symmetric increase in thresholds towards the long wavelength ("red") and middle wavelength ("green") regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient's results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both "red/green" and "yellow/blue" directions in colour space, the subject's lower left quadrant showed a marked asymmetry in "red/green" thresholds with the greatest loss of sensitivity towards the "green" region of the spectrum locus. This spatially localized asymmetric loss of "green" but not "red" sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage of neural substrates in the visual cortex that process colour information, but are spectrally non-opponent.

  14. The Naked Truth: The Face and Body Sensitive N170 Response Is Enhanced for Nude Bodies

    PubMed Central

    Hietanen, Jari K.; Nummenmaa, Lauri

    2011-01-01

    Recent event-related potential studies have shown that the occipitotemporal N170 component - best known for its sensitivity to faces - is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior. PMID:22110574

  15. Sensitivity to First-Order Relations of Facial Elements in Infant Rhesus Macaques

    ERIC Educational Resources Information Center

    Paukner, Annika; Bower, Seth; Simpson, Elizabeth A.; Suomi, Stephen J.

    2013-01-01

    Faces are visually attractive to both human and nonhuman primates. Human neonates are thought to have a broad template for faces at birth and prefer face-like to non-face-like stimuli. To better compare developmental trajectories of face processing phylogenetically, here, we investigated preferences for face-like stimuli in infant rhesus macaques…

  16. Spatial Construction Skills of Chimpanzees ("Pan Troglodytes") and Young Human Children ("Homo Sapiens Sapiens")

    ERIC Educational Resources Information Center

    Poti, Patrizia; Hayashi, Misato; Matsuzawa, Tetsuro

    2009-01-01

    Spatial construction tasks are basic tests of visual-spatial processing. Two studies have assessed spatial construction skills in chimpanzees (Pan troglodytes) and young children (Homo sapiens sapiens) with a block modelling task. Study 1a subjects were three young chimpanzees and five adult chimpanzees. Study 1b subjects were 30 human children…

  17. 'What' and 'where' in the human brain.

    PubMed

    Ungerleider, L G; Haxby, J V

    1994-04-01

    Multiple visual areas in the cortex of nonhuman primates are organized into two hierarchically organized and functionally specialized processing pathways, a 'ventral stream' for object vision and a 'dorsal stream' for spatial vision. Recent findings from positron emission tomography activation studies have localized these pathways within the human brain, yielding insights into cortical hierarchies, specialization of function, and attentional mechanisms.

  18. Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects.

    PubMed

    VanRullen, R; Thorpe, S J

    2001-01-01

    Visual processing is known to be very fast in ultra-rapid categorisation tasks where the subject has to decide whether a briefly flashed image belongs to a target category or not. Human subjects can respond in under 400 ms, and event-related-potential studies have shown that the underlying processing can be done in less than 150 ms. Monkeys trained to perform the same task have proved even faster. However, most of these experiments have only been done with biologically relevant target categories such as animals or food. Here we performed the same study on human subjects, alternating between a task in which the target category was 'animal', and a task in which the target category was 'means of transport'. These natural images of clearly artificial objects contained targets as varied as cars, trucks, trains, boats, aircraft, and hot-air balloons. However, the subjects performed almost identically in both tasks, with reaction times not significantly longer in the 'means of transport' task. These reaction times were much shorter than in any previous study on natural-image processing. We conclude that, at least for these two superordinate categories, the speed of ultra-rapid visual categorisation of natural scenes does not depend on the target category, and that this processing could rely primarily on feed-forward, automatic mechanisms.

  19. A Human Factors Framework for Payload Display Design

    NASA Technical Reports Server (NTRS)

    Dunn, Mariea C.; Hutchinson, Sonya L.

    1998-01-01

    During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.

  20. Directed Communication between Nucleus Accumbens and Neocortex in Humans Is Differentially Supported by Synchronization in the Theta and Alpha Band.

    PubMed

    Horschig, Jörn M; Smolders, Ruud; Bonnefond, Mathilde; Schoffelen, Jan-Mathijs; van den Munckhof, Pepijn; Schuurman, P Richard; Cools, Roshan; Denys, Damiaan; Jensen, Ole

    2015-01-01

Here, we report evidence for oscillatory bi-directional interactions between the nucleus accumbens and the neocortex in humans. Six patients performed a demanding covert visual attention task while we simultaneously recorded brain activity from deep-brain electrodes implanted in the nucleus accumbens and the surface electroencephalogram (EEG). Both theta and alpha oscillations were strongly coherent with the frontal and parietal EEG during the task. Theta-band coherence increased during processing of the visual stimuli. Granger causality analysis revealed that the nucleus accumbens was communicating with the neocortex primarily in the theta-band, while the cortex communicated with the nucleus accumbens primarily in the alpha-band. These data are consistent with a model in which theta- and alpha-band oscillations serve dissociable roles: Prior to stimulus processing, the cortex might suppress ongoing processing in the nucleus accumbens by modulating alpha-band activity. Subsequently, upon stimulus presentation, theta oscillations might facilitate the active exchange of stimulus information from the nucleus accumbens to the cortex.
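Band-limited coupling of this kind is typically quantified with Welch-based spectral coherence. A minimal sketch on synthetic data, with a shared 6 Hz (theta) component standing in for the recorded signals (sampling rate, noise level, and segment length are all illustrative choices, not the study's parameters):

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                        # sampling rate, Hz
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)                # shared 6 Hz (theta-band) drive
x = theta + 0.5 * rng.standard_normal(t.size)    # stand-in for an accumbens recording
y = theta + 0.5 * rng.standard_normal(t.size)    # stand-in for a cortical EEG channel

f, cxy = coherence(x, y, fs=fs, nperseg=2048)
theta_band = (f >= 4) & (f <= 8)
alpha_band = (f >= 8) & (f <= 12)
print(cxy[theta_band].mean())                    # high: the shared theta drive
print(cxy[alpha_band].mean())                    # low: only independent noise
```

Coherence alone is symmetric; establishing the directionality reported in the study additionally requires Granger causality or a similar directed measure.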

  1. The forgotten artist: Why to consider intentions and interaction in a model of aesthetic experience. Comment on "Move me, astonish me... delight my eyes and brain: The Vienna Integrated Model of top-down and bottom-up processes in Art Perception (VIMAP) and corresponding affective, evaluative, and neurophysiological correlates" by Matthew Pelowski et al.

    NASA Astrophysics Data System (ADS)

    Brattico, Elvira; Brattico, Pauli; Vuust, Peter

    2017-07-01

In their target article published in this journal issue, Pelowski et al. [1] address the question of how humans experience, and respond to, visual art. They propose a multi-layered model of the representations and processes involved in assessing visual art objects that, furthermore, involves both bottom-up and top-down elements. Their model provides predictions for seven different outcomes of human aesthetic experience, based on a few distinct features (schema congruence, self-relevance, and coping necessity), and connects the underlying processing stages to 'specific correlates of the brain' (a similar attempt was previously done for music by [2-4]). In doing this, the model aims to account for the (often profound) experience of an individual viewer in front of an art object.

  2. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    NASA Astrophysics Data System (ADS)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  3. Attentional load and sensory competition in human vision: modulation of fMRI responses by load at fixation during task-irrelevant stimulation in the peripheral visual field.

    PubMed

    Schwartz, Sophie; Vuilleumier, Patrik; Hutton, Chloe; Maravita, Angelo; Dolan, Raymond J; Driver, Jon

    2005-06-01

    Perceptual suppression of distractors may depend on both endogenous and exogenous factors, such as attentional load of the current task and sensory competition among simultaneous stimuli, respectively. We used functional magnetic resonance imaging (fMRI) to compare these two types of attentional effects and examine how they may interact in the human brain. We varied the attentional load of a visual monitoring task performed on a rapid stream at central fixation without altering the central stimuli themselves, while measuring the impact on fMRI responses to task-irrelevant peripheral checkerboards presented either unilaterally or bilaterally. Activations in visual cortex for irrelevant peripheral stimulation decreased with increasing attentional load at fixation. This relative decrease was present even in V1, but became larger for successive visual areas through to V4. Decreases in activation for contralateral peripheral checkerboards due to higher central load were more pronounced within retinotopic cortex corresponding to 'inner' peripheral locations relatively near the central targets than for more eccentric 'outer' locations, demonstrating a predominant suppression of nearby surround rather than strict 'tunnel vision' during higher task load at central fixation. Contralateral activations for peripheral stimulation in one hemifield were reduced by competition with concurrent stimulation in the other hemifield only in inferior parietal cortex, not in retinotopic areas of occipital visual cortex. In addition, central attentional load interacted with competition due to bilateral versus unilateral peripheral stimuli specifically in posterior parietal and fusiform regions. These results reveal that task-dependent attentional load, and interhemifield stimulus-competition, can produce distinct influences on the neural responses to peripheral visual stimuli within the human visual system. 
These distinct mechanisms in selective visual processing may be integrated within posterior parietal areas, rather than earlier occipital cortex.

  4. Physics: Quantum problems solved through games

    NASA Astrophysics Data System (ADS)

    Maniscalco, Sabrina

    2016-04-01

    Humans are better than computers at performing certain tasks because of their intuition and superior visual processing. Video games are now being used to channel these abilities to solve problems in quantum physics. See Letter p.210

  5. Internal model of gravity influences configural body processing.

    PubMed

    Barra, Julien; Senot, Patrice; Auclair, Laurent

    2017-01-01

    Human bodies are processed by a configural processing mechanism. Evidence supporting this claim is the body inversion effect, in which inversion impairs recognition of bodies more than other objects. Biomechanical configuration, as well as both visual and embodied expertise, has been demonstrated to play an important role in this effect. Nevertheless, the important factor of body inversion effect may also be linked to gravity orientation since gravity is one of the most fundamental constraints of our biology, behavior, and perception on Earth. The visual presentation of an inverted body in a typical body inversion paradigm turns the observed body upside down but also inverts the implicit direction of visual gravity in the scene. The orientation of visual gravity is then in conflict with the direction of actual gravity and may influence configural processing. To test this hypothesis, we dissociated the orientations of the body and of visual gravity by manipulating body posture. In a pretest we showed that it was possible to turn an avatar upside down (inversion relative to retinal coordinates) without inverting the orientation of visual gravity when the avatar stands on his/her hands. We compared the inversion effect in typical conditions (with gravity conflict when the avatar is upside down) to the inversion effect in conditions with no conflict between visual and physical gravity. The results of our experiment revealed that the inversion effect, as measured by both error rate and reaction time, was strongly reduced when there was no gravity conflict. Our results suggest that when an observed body is upside down (inversion relative to participants' retinal coordinates) but the orientation of visual gravity is not, configural processing of bodies might still be possible. In this paper, we discuss the implications of an internal model of gravity in the configural processing of observed bodies. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. The Human Performance Evaluation Process: A Resource for Reviewing the Identification and Resolution of Human Performance Problems

    DTIC Science & Technology

    2002-03-01

    may show that personnel were working in a very noisy environment, wearing hearing protection, and were unable to communicate effectively — with the... hearing others’ descriptions of what occurred or discussing it with them, the kinds of questions that are asked during the interview, and other factors...Marijuana use decreases the ability to process both auditory and visual stimuli, so may affect, for example, an individual’s ability to read and follow

  7. NK1 receptor antagonism and emotional processing in healthy volunteers.

    PubMed

    Chandra, P; Hafizi, S; Massey-Chase, R M; Goodwin, G M; Cowen, P J; Harmer, C J

    2010-04-01

The neurokinin-1 (NK(1)) receptor antagonist, aprepitant, showed activity in several animal models of depression; however, its efficacy in clinical trials was disappointing. There is little knowledge of the role of NK(1) receptors in human emotional behaviour to help explain this discrepancy. The aim of the current study was to assess the effects of a single oral dose of aprepitant (125 mg) on models of emotional processing sensitive to conventional antidepressant drug administration in 38 healthy volunteers, randomly allocated to receive aprepitant or placebo in a between-groups, double-blind design. Performance on measures of facial expression recognition, emotional categorisation, memory, and an attentional visual-probe task was assessed following drug absorption. Relative to placebo, aprepitant improved recognition of happy facial expressions and increased vigilance to emotional information in the unmasked condition of the visual-probe task. In contrast, aprepitant impaired emotional memory and slowed responses in the facial expression recognition task, suggesting possible deleterious effects on cognition. These results suggest that while antagonism of NK(1) receptors does affect emotional processing in humans, its effects are more restricted and less consistent across tasks than those of conventional antidepressants. Human models of emotional processing may provide a useful means of assessing the likely therapeutic potential of new treatments for depression.

  8. Integrating natural language processing and web GIS for interactive knowledge domain visualization

    NASA Astrophysics Data System (ADS)

    Du, Fangming

Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing (in the form of topic modeling) with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like exploration of a knowledge domain visualization.
A complete workflow is proposed and implemented that processes any corpus of input text documents into map form and leverages a web application framework to let users explore knowledge domain maps interactively. The workflow is demonstrated on a data set of more than 66,000 conference abstracts.
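    The dimensionality-reduction step of such a workflow can be sketched with a tiny self-organizing map (SOM). This is an illustrative toy, not the thesis's implementation: documents are represented here as plain bag-of-words vectors, and the grid size, learning-rate schedule, and neighborhood schedule are arbitrary choices.

```python
import numpy as np

def train_som(docs, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a SOM on document vectors (numpy array, n_docs x n_features).
    Returns weights of shape (rows, cols, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, docs.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)              # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5  # shrinking neighborhood radius
        for x in docs[rng.permutation(len(docs))]:
            # best matching unit for this document
            d = ((w - x) ** 2).sum(axis=2)
            bmu = np.unravel_index(d.argmin(), d.shape)
            # pull the BMU's grid neighborhood toward the document vector
            g = np.exp(-((coords - bmu) ** 2).sum(axis=2) / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
    return w

def place(doc, w):
    """Map a document vector to its (row, col) cell on the knowledge map."""
    d = ((w - doc) ** 2).sum(axis=2)
    return np.unravel_index(d.argmin(), d.shape)
```

    Each document's cell gives its 2-D position on the knowledge map; similar documents land in the same or nearby cells, which is what makes pan-and-zoom, map-style exploration of a corpus possible.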

  9. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery, such as target direction, and to assess the performance of the visual learning process itself.
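    The chip-normalization step described above (crop a window at a candidate location, rotate it upright, rescale it to the classifier's input size) can be sketched in numpy. This is a minimal illustration under assumed conventions, not the IDC implementation: nearest-neighbour resampling, square chips, and all sizes are illustrative.

```python
import numpy as np

def extract_chip(image, center, size):
    """Crop a size x size window around (row, col)."""
    r, c = center
    h = size // 2
    return image[r - h:r - h + size, c - h:c - h + size]

def rotate_scale(chip, angle_deg, out_size):
    """Rotate a square chip about its centre and resample to
    out_size x out_size via inverse mapping with nearest-neighbour
    lookup (out_size must be > 1)."""
    n = chip.shape[0]
    a = np.deg2rad(angle_deg)
    ctr = (n - 1) / 2.0
    # output grid mapped back into source chip coordinates
    ys, xs = np.mgrid[0:out_size, 0:out_size] * (n - 1) / (out_size - 1)
    y, x = ys - ctr, xs - ctr
    src_r = np.clip(np.rint(ctr + y * np.cos(a) - x * np.sin(a)), 0, n - 1)
    src_c = np.clip(np.rint(ctr + y * np.sin(a) + x * np.cos(a)), 0, n - 1)
    return chip[src_r.astype(int), src_c.astype(int)]
```

    Normalizing every candidate chip to a canonical orientation and size is what lets a single classifier handle the location, scale, and rotation invariance the abstract calls for.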

  10. Evaluation of Visualization Software

    NASA Technical Reports Server (NTRS)

    Globus, Al; Uselton, Sam

    1995-01-01

Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated, and that the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of the mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluation and discuss appropriate uses of some methods.

  11. Learning-dependent plasticity with and without training in the human brain.

    PubMed

    Zhang, Jiaxiang; Kourtzi, Zoe

    2010-07-27

    Long-term experience through development and evolution and shorter-term training in adulthood have both been suggested to contribute to the optimization of visual functions that mediate our ability to interpret complex scenes. However, the brain plasticity mechanisms that mediate the detection of objects in cluttered scenes remain largely unknown. Here, we combine behavioral and functional MRI (fMRI) measurements to investigate the human-brain mechanisms that mediate our ability to learn statistical regularities and detect targets in clutter. We show two different routes to visual learning in clutter with discrete brain plasticity signatures. Specifically, opportunistic learning of regularities typical in natural contours (i.e., collinearity) can occur simply through frequent exposure, generalize across untrained stimulus features, and shape processing in occipitotemporal regions implicated in the representation of global forms. In contrast, learning to integrate discontinuities (i.e., elements orthogonal to contour paths) requires task-specific training (bootstrap-based learning), is stimulus-dependent, and enhances processing in intraparietal regions implicated in attention-gated learning. We propose that long-term experience with statistical regularities may facilitate opportunistic learning of collinear contours, whereas learning to integrate discontinuities entails bootstrap-based training for the detection of contours in clutter. These findings provide insights in understanding how long-term experience and short-term training interact to shape the optimization of visual recognition processes.

  12. Visual Data Exploration and Analysis - Report on the Visualization Breakout Session of the SCaLeS Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Frank, Randy; Fulcomer, Sam

Scientific visualization is the transformation of abstract information into images, and it plays an integral role in the scientific process by facilitating insight into observed or simulated phenomena. Visualization as a discipline spans many research areas, from computer science and cognitive psychology to art. Yet the most successful visualization applications are created when close synergistic interactions with domain scientists are part of the algorithmic design and implementation process, leading to visual representations with clear scientific meaning. Visualization is used to explore, to debug, to gain understanding, and as an analysis tool. Visualization is literally everywhere: images are present in this report, on television, on the web, in books and magazines. The common theme is the ability to present information visually so that it is rapidly assimilated by human observers and transformed into understanding or insight. As an indispensable part of a modern science laboratory, visualization is akin to the biologist's microscope or the electrical engineer's oscilloscope. Whereas the microscope is limited to small specimens or the use of optics to focus light, the power of scientific visualization is virtually limitless: visualization provides the means to examine data at galactic or atomic scales, or at any size in between. Unlike the traditional scientific tools for visual inspection, visualization offers the means to "see the unseeable." Trends in demographics or changes in levels of atmospheric CO₂ as a function of greenhouse gas emissions are familiar examples of such unseeable phenomena. Over time, visualization techniques evolve in response to scientific need. Each scientific discipline has its "own language," verbal and visual, used for communication. The visual language for depicting electrical circuits is much different than the visual language for depicting theoretical molecules or trends in the stock market.
There is no "one visualization tool" that can serve as a panacea for all science disciplines. Instead, visualization researchers work hand in hand with domain scientists as part of the scientific research process to define, create, adapt, and refine software that "speaks the visual language" of each scientific domain.

  13. A visual analytic framework for data fusion in investigative intelligence

    NASA Astrophysics Data System (ADS)

    Cai, Guoray; Gross, Geoff; Llinas, James; Hall, David

    2014-05-01

Intelligence analysis depends on data fusion systems to provide capabilities for detecting and tracking important objects, events, and their relationships in connection to an analytical situation. However, automated data fusion technologies are not mature enough to offer reliable and trustworthy information for situation awareness. Given the trend of increasing sophistication of data fusion algorithms and the loss of transparency in the data fusion process, analysts are left out of the data fusion process cycle with little to no control over, or confidence in, the data fusion outcome. Following the recent rethinking of data fusion as a human-centered process, this paper proposes a conceptual framework towards developing an alternative data fusion architecture. The idea is inspired by recent advances in our understanding of human cognitive systems, the science of visual analytics, and the latest thinking about human-centered data fusion. Our conceptual framework is supported by an analysis of the limitations of existing fully automated data fusion systems, where the effectiveness of important algorithmic decisions depends on the availability of expert knowledge or knowledge of the analyst's mental state in an investigation. The success of this effort will result in next-generation data fusion systems that can be better trusted while maintaining high throughput.

  14. Understanding human visual systems and its impact on our intelligent instruments

    NASA Astrophysics Data System (ADS)

    Strojnik Scholl, Marija; Páez, Gonzalo; Scholl, Michelle K.

    2013-09-01

We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences into the flourishing fields of neural processing, parallel processing, and associative memory in optical sciences and computing. Then we examine how the intensive efforts in mapping the human brain have been influenced by concepts in computer science, control theory, and electronic circuits. We discuss two neural paths that employ input from the visual sense to determine navigational options and object recognition: the ventral temporal pathway for object recognition (what?) and the dorsal parietal pathway for navigation (where?), respectively. We describe the reflexive and conscious decision centers in the cerebral cortex involved with visual attention and gaze control. Interestingly, these require a return path through the midbrain for ocular muscle control. We find that cognitive psychologists currently study the human brain employing low-spatial-resolution fMRI with a temporal response on the order of a second. In recent years, life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and inferior parietal lobe, constituting the fronto-parietal attention network. We note that ethical and experiential learning impacts our conscious decisions.

  15. Optical hiding with visual cryptography

    NASA Astrophysics Data System (ADS)

    Shi, Yishi; Yang, Xiubo

    2017-11-01

We propose an optical hiding method based on visual cryptography. In the hiding process, we convert the secret information into a set of fabricated phase-keys, which are completely independent of each other, proof against intensity detection, and image-covered, leading to high security. During the extraction process, the covered phase-keys are illuminated with laser beams and then incoherently superimposed to extract the hidden information directly by human vision, without complicated optical implementations or any additional computation, making extraction convenient. Moreover, the phase-keys are manufactured as diffractive optical elements that are robust to attacks such as blocking and phase noise. Optical experiments verify that high security, easy extraction, and strong robustness are all obtainable in visual-cryptography-based optical hiding.
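    The visual-cryptography principle this optical method builds on can be illustrated with the classic digital 2-out-of-2 scheme (Naor-Shamir), in which each share alone is random noise and only their physical superposition reveals the secret. This is the textbook scheme for illustration, not the authors' phase-key implementation.

```python
import random

PATTERNS = [(0, 1), (1, 0)]  # subpixel pairs; 1 = black

def make_shares(secret, rng=random.Random(0)):
    """secret: 2D list of 0 (white) / 1 (black) pixels.
    Returns two shares; each secret pixel becomes two subpixels per share."""
    s1, s2 = [], []
    for row in secret:
        r1, r2 = [], []
        for px in row:
            p = rng.choice(PATTERNS)
            r1.extend(p)
            # white pixel: same pattern in both shares (stacks to half black)
            # black pixel: complementary pattern (stacks to all black)
            r2.extend(p if px == 0 else (1 - p[0], 1 - p[1]))
        s1.append(r1)
        s2.append(r2)
    return s1, s2

def stack(s1, s2):
    """Simulate overlaying transparencies: black wins (logical OR)."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

    The eye reads the stacked image by contrast: secret-black regions are fully black, secret-white regions only half black, and no computation is needed for decryption, which is the property the incoherent optical superposition above exploits.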

  16. Neural responses to salient visual stimuli.

    PubMed Central

    Morris, J S; Friston, K J; Dolan, R J

    1997-01-01

The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546

  17. Anatomy and physiology of the afferent visual system.

    PubMed

    Prasad, Sashank; Galetta, Steven L

    2011-01-01

    The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

The cerebral cortex modulates early sensory processing via feedback connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual-speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages plays an important role in communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism for optimizing speech recognition. Copyright © 2018. Published by Elsevier Inc.

  19. A Cognitive Model for Exposition of Human Deception and Counterdeception

    DTIC Science & Technology

    1987-10-01

    for understanding deception and counterdeceptlon, for developing related tactics, and for stimulating research in cognitive processes. Further...Processing Resources; Attention) BUFFER MEMORY MANAGER (Local) (Problem Solving; Learning; Procedures) BUFFER MEMORY SENSORS Visual, Auditory ...Perception and Misperception in International Politics, Princeton University Press, Princeton, NJ, 1976. Key, W.B., Subliminal Seduction. New

  20. Coding of visual object features and feature conjunctions in the human brain.

    PubMed

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2008-01-01

Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process: while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

  1. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  2. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    PubMed

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns.

  3. Diversification of visual media retrieval results using saliency detection

    NASA Astrophysics Data System (ADS)

    Muratov, Oleg; Boato, Giulia; De Natale, Franesco G. B.

    2013-03-01

Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, and the use of visual features is inevitable. Visual saliency is information about the main object of an image, implicitly included by humans while creating visual content. For this reason it is natural to exploit this information for diversifying the content. In this work we study whether visual saliency can be used for the task of diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.
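    The general idea of diversity-aware re-ranking can be sketched with a greedy, MMR-style selection. This is a hedged illustration, not the paper's method: feature extraction from saliency maps is abstracted into a caller-supplied similarity function `sim`, and the trade-off weight `lam` is an arbitrary choice.

```python
def rerank(results, sim, lam=0.5):
    """results: list of (image_id, relevance) pairs, relevance in [0, 1].
    sim(a, b) -> similarity in [0, 1] between the salient regions of two
    images. Greedily picks the item maximizing relevance minus a penalty
    for similarity to already-selected items. Returns ids in new order."""
    remaining = list(results)
    ranked = []
    while remaining:
        def score(item):
            _id, rel = item
            # penalize redundancy with everything already selected
            penalty = max((sim(_id, s) for s, _ in ranked), default=0.0)
            return rel - lam * penalty
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return [i for i, _ in ranked]
```

    With a saliency-based `sim`, a near-duplicate of an already-shown image is pushed down the list even if its raw relevance is high, which is exactly the diversity effect the abstract describes.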

  4. Simulating the decentralized processes of the human immune system in a virtual anatomy model.

    PubMed

    Sarpe, Vladimir; Jacob, Christian

    2013-01-01

Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on the physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model, such as immune cells, viruses, and cytokines, interact through simulated physics in two separate, compartmentalized, and decentralized 3-dimensional environments, namely (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation, and interaction of immune system agents between these sites. The distribution of simulated processes, which can communicate across multiple local CPUs or through a network of machines, provides a starting point for building decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model.
We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.

  5. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.

    PubMed

    Häkkinen, Suvi; Rinne, Teemu

    2018-06-01

    A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination (i.e. no directed auditory attention), pitch discrimination, and two versions of pitch n-back memory tasks. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more longer-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL during the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during presentation of sounds, and during active listening provides novel information about the functional organization of human AC.

  6. Learning to recognize face shapes through serial exploration.

    PubMed

    Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H

    2013-05-01

Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition, in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in terms of speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities relying on serial encoding.

  7. BatMass: a Java Software Platform for LC-MS Data Visualization in Proteomics and Metabolomics.

    PubMed

    Avtonomov, Dmitry M; Raskind, Alexander; Nesvizhskii, Alexey I

    2016-08-05

    Mass spectrometry (MS) coupled to liquid chromatography (LC) is a commonly used technique in metabolomic and proteomic research. As the size and complexity of LC-MS-based experiments grow, it becomes increasingly difficult to perform quality control of both raw data and processing results. In a practical setting, quality control steps for raw LC-MS data are often overlooked, and assessment of an experiment's success is based on some derived metrics such as "the number of identified compounds". The human brain interprets visual data much better than plain text, hence the saying "a picture is worth a thousand words". Here, we present the BatMass software package, which allows for performing quick quality control of raw LC-MS data through its fast visualization capabilities. It also serves as a testbed for developers of LC-MS data processing algorithms by providing a data access library for open mass spectrometry file formats and a means of visually mapping processing results back to the original data. We illustrate the utility of BatMass with several use cases of quality control and data exploration.

  8. BatMass: a Java software platform for LC/MS data visualization in proteomics and metabolomics

    PubMed Central

    Avtonomov, Dmitry; Raskind, Alexander; Nesvizhskii, Alexey I.

    2017-01-01

    Mass spectrometry (MS) coupled to liquid chromatography (LC) is a commonly used technique in metabolomic and proteomic research. As the size and complexity of LC/MS-based experiments grow, it becomes increasingly difficult to perform quality control of both raw data and processing results. In a practical setting, quality control steps for raw LC/MS data are often overlooked, and assessment of an experiment's success is based on some derived metrics such as “the number of identified compounds”. The human brain interprets visual data much better than plain text, hence the saying “a picture is worth a thousand words”. Here we present the BatMass software package, which allows users to perform quick quality control of raw LC/MS data through its fast visualization capabilities. It also serves as a testbed for developers of LC/MS data processing algorithms by providing a data access library for open mass spectrometry file formats and a means of visually mapping processing results back to the original data. We illustrate the utility of BatMass with several use cases of quality control and data exploration. PMID:27306858

  9. Healthy children show gender differences in correlations between nonverbal cognitive ability and brain activation during visual perception.

    PubMed

    Asano, Kohei; Taki, Yasuyuki; Hashizume, Hiroshi; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Takeuchi, Hikaru; Kawashima, Ryuta

    2014-08-08

    Humans perceive both textual and nontextual information during visual perception, and both depend on language. In childhood education, students exhibit diverse perceptual abilities, such that some students process textual information better and some process nontextual information better. These predispositions involve many factors, including cognitive ability and learning preference. However, the relationship between verbal and nonverbal cognitive abilities and brain activation during visual perception has not yet been examined in children. We used functional magnetic resonance imaging to examine the relationship between nonverbal and verbal cognitive abilities and brain activation during nontextual visual perception in a large sample of children. A significant positive correlation was found between nonverbal cognitive abilities and brain activation in the right temporoparietal junction, which is thought to be related to attention reorienting. This significant positive correlation existed only in boys. These findings suggested that male brain activation differed from female brain activation, and that this depended on individual cognitive processes, even if there was no gender difference in behavioral performance. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Human factors guidelines for applications of 3D perspectives: a literature review

    NASA Astrophysics Data System (ADS)

    Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise

    2009-05-01

    Once considered too processing-intensive for general utility, application of the third dimension to convey complex information is facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly-changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for usability optimization of 2D, 2.5D, or 3D visualizations, nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators. Defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open source literature addressing 3D information displays, with particular emphasis on comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations, along with recommendations for further research, are discussed.

  11. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348

  12. Different Types of Laughter Modulate Connectivity within Distinct Parts of the Laughter Perception Network

    PubMed Central

    Ethofer, Thomas; Brück, Carolin; Alter, Kai; Grodd, Wolfgang; Kreifelts, Benjamin

    2013-01-01

    Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. 
Here, the right dorsolateral prefrontal cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter. PMID:23667619

  13. Different types of laughter modulate connectivity within distinct parts of the laughter perception network.

    PubMed

    Wildgruber, Dirk; Szameitat, Diana P; Ethofer, Thomas; Brück, Carolin; Alter, Kai; Grodd, Wolfgang; Kreifelts, Benjamin

    2013-01-01

    Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. 
Here, the right dorsolateral prefrontal cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter.

  14. Dog Breed Differences in Visual Communication with Humans.

    PubMed

    Konno, Akitsugu; Romero, Teresa; Inoue-Murayama, Miho; Saito, Atsuko; Hasegawa, Toshikazu

    2016-01-01

    Domestic dogs (Canis familiaris) have developed a close relationship with humans through the process of domestication. In human-dog interactions, eye contact is a key element of relationship initiation and maintenance. Previous studies have suggested that canine ability to produce human-directed communicative signals is influenced by domestication history, from wolves to dogs, as well as by recent breed selection for particular working purposes. To test the genetic basis for such abilities in purebred dogs, we examined gazing behavior towards humans using two types of behavioral experiments: the 'visual contact task' and the 'unsolvable task'. A total of 125 dogs participated in the study. Based on genetic relatedness among breeds, subjects were classified into five breed groups: Ancient, Herding, Hunting, Retriever-Mastiff, and Working. We found that Ancient breeds took longer to make eye contact with humans, and that they gazed at humans for shorter periods of time than any other breed group in the unsolvable situation. Our findings suggest that spontaneous gaze behavior towards humans is associated with genetic similarity to wolves rather than with recent selective pressure to create particular working breeds.

  15. NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations

    NASA Image and Video Library

    2016-11-23

    The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.

  16. Theoretical aspects of color vision

    NASA Technical Reports Server (NTRS)

    Wolbarsht, M. L.

    1972-01-01

    The three color receptors of Young-Helmholtz and the opponent colors type of information processing postulated by Hering are both present in the human visual system. This mixture accounts for both the phenomena of color matching or hue discrimination and such perceptual qualities of color as the division of the spectrum into color bands. The functioning of the cells in the visual system, especially within the retina, and the relation of this function to color perception are discussed.

  17. Human factors of intelligent computer aided display design

    NASA Technical Reports Server (NTRS)

    Hunt, R. M.

    1985-01-01

    Design concepts for a decision support system being studied at NASA Langley as an aid to visual display unit (VDU) designers are described. Ideally, human factors should be taken into account by VDU designers. In reality, although the human factors database on VDUs is small, such systems must be constantly developed. Human factors are therefore a secondary consideration. An expert system will thus serve mainly in an advisory capacity. Functions can include facilitating the design process by shortening the time to generate and alter drawings, enhancing the capability of breaking design requirements down into simpler functions, and providing visual displays equivalent to the final product. The VDU system could also discriminate, and display the difference, between designer decisions and machine inferences. The system could also aid in analyzing the effects of designer choices on future options and in enunciating when there are data available on a design selection.

  18. Visual recovery in cortical blindness is limited by high internal noise

    PubMed Central

    Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.

    2015-01-01

    Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
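The linear amplifier model used in the analysis above has a simple closed form: threshold signal energy grows linearly with external noise power, E = k(N + N_eq), so the equivalent internal noise N_eq can be read off as the x-intercept of a straight-line fit to threshold-versus-noise data. A minimal sketch of that fit, with made-up illustrative values rather than the study's measurements:

```python
# Toy threshold-versus-noise (TvN) data: external noise power N and
# measured threshold signal energy E at each noise level.
# Values are illustrative only, not the study's data.
N = [0.0, 1.0, 2.0, 4.0, 8.0]
E = [0.5, 1.0, 1.5, 2.5, 4.5]

# Ordinary least-squares line E = k*N + b.
n = len(N)
mN = sum(N) / n
mE = sum(E) / n
k = sum((x - mN) * (y - mE) for x, y in zip(N, E)) / sum((x - mN) ** 2 for x in N)
b = mE - k * mN

# Under the LAM, E = k * (N + N_eq), so the equivalent internal
# noise is the intercept divided by the slope.
N_eq = b / k

print(k, N_eq)  # slope and equivalent internal noise
```

With these toy numbers the fit recovers a slope of 0.5 and an equivalent internal noise of 1.0; in the study, a training-induced drop in the blind-field N_eq is what indicates reduced internal noise.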

  19. A quantitative model for transforming reflectance spectra into the Munsell color space using cone sensitivity functions and opponent process weights.

    PubMed

    D'Andrade, Roy G; Romney, A Kimball

    2003-05-13

    This article presents a computational model of the process through which the human visual system transforms reflectance spectra into perceptions of color. Using physical reflectance spectra data and standard human cone sensitivity functions we describe the transformations necessary for predicting the location of colors in the Munsell color space. These transformations include quantitative estimates of the opponent process weights needed to transform cone activations into Munsell color space coordinates. Using these opponent process weights, the Munsell position of specific colors can be predicted from their physical spectra with a mean correlation of 0.989.
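The pipeline the abstract describes, reflectance spectrum to cone activations to opponent coordinates, amounts to two matrix-vector products. The sketch below illustrates the shape of that computation with made-up cone sensitivities and opponent weights; the article's fitted weights and resulting Munsell coordinates are not reproduced here:

```python
import numpy as np

# Toy cone sensitivity functions (L, M, S) sampled on a coarse
# 4-point wavelength grid. These values are hypothetical.
cones = np.array([
    [0.1, 0.4, 0.9, 0.6],   # L
    [0.2, 0.8, 0.7, 0.2],   # M
    [0.9, 0.3, 0.1, 0.0],   # S
])

# Toy surface reflectance spectrum on the same grid.
reflectance = np.array([0.2, 0.5, 0.8, 0.4])

# Step 1: cone activations = inner product of spectrum and sensitivities.
lms = cones @ reflectance

# Step 2: opponent transform with hypothetical weights:
# achromatic, red-green, and blue-yellow channels.
W = np.array([
    [1.0,  1.0,  0.0],   # lightness  ~  L + M
    [1.0, -2.0,  1.0],   # red-green  ~  L - 2M + S
    [1.0,  1.0, -2.0],   # blue-yellow ~ L + M - 2S
])
opponent = W @ lms

print(opponent)
```

The article's contribution is the quantitative estimation of the opponent weights (the matrix `W` here) so that the resulting coordinates align with Munsell space; this toy version only shows the structure of the transformation.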

  20. The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex

    PubMed Central

    Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno

    2016-01-01

    Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. 
SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014

  1. Sensitive periods for the functional specialization of the neural system for human face processing.

    PubMed

    Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide

    2013-10-15

    The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.

  2. Visualization techniques for tongue analysis in traditional Chinese medicine

    NASA Astrophysics Data System (ADS)

    Pham, Binh L.; Cai, Yang

    2004-05-01

    Visual inspection of the tongue has been an important diagnostic method of Traditional Chinese Medicine (TCM). Clinical data have shown significant connections between various viscera cancers and abnormalities in the tongue and the tongue coating. Visual inspection of the tongue is simple and inexpensive, but the current practice in TCM is mainly experience-based and the quality of the visual inspection varies between individuals. The computerized inspection method provides quantitative models to evaluate color, texture and surface features on the tongue. In this paper, we investigate visualization techniques and processes to allow interactive data analysis with the aim of merging computerized measurements with human experts' diagnostic variables based on five-scale diagnostic conditions: Healthy (H), History of Cancers (HC), History of Polyps (HP), Polyps (P) and Colon Cancer (C).

  3. The uncertain response in humans and animals

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Shields, W. E.; Schull, J.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)

    1997-01-01

    There has been no comparative psychological study of uncertainty processes. Accordingly, the present experiments asked whether animals, like humans, escape adaptively when they are uncertain. Human and animal observers were given two primary responses in a visual discrimination task, and the opportunity to escape from some trials into easier ones. In one psychophysical task (using a threshold paradigm), humans escaped selectively the difficult trials that left them uncertain of the stimulus. Two rhesus monkeys (Macaca mulatta) also showed this pattern. In a second psychophysical task (using the method of constant stimuli), some humans showed this pattern but one escaped infrequently and nonoptimally. Monkeys showed equivalent individual differences. The data suggest that escapes by humans and monkeys are interesting cognitive analogs and may reflect controlled decisional processes prompted by the perceptual ambiguity at threshold.

  4. Human Dimensions in Future Battle Command Systems: A Workshop Report

    DTIC Science & Technology

    2008-04-01

    information processing). These dimensions can best be described anecdotally and metaphorically as: • Battle command is a human-centric...enhance information visualization techniques in the decision tools, including multimodal platforms: video, graphics, symbols, etc. This should be...organization members. Each dimension can metaphorically represent the spatial location of individuals and group thinking in a trajectory of social norms

  5. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
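The error-driven update at the heart of predictive coding, as summarized in the abstract above, can be caricatured in a few lines: an internal estimate predicts the next input, and the prediction error nudges the estimate toward the data. This is a deliberately minimal scalar sketch with arbitrary inputs and learning rate, not PredNet itself:

```python
# Predictive-coding caricature: the internal model state predicts the
# next input; the prediction error refines the state. All numbers are
# arbitrary toy values for illustration.
inputs = [1.0, 1.2, 1.1, 1.3, 1.25]  # observed sensory inputs
estimate = 0.0                       # internal model state (the "prediction")
lr = 0.5                             # error-correction rate

for x in inputs:
    error = x - estimate     # prediction error (input minus prediction)
    estimate += lr * error   # refine the internal model with the error

print(estimate)
```

PredNet stacks this idea into a deep network whose layers exchange predictions and errors over video frames; the scalar loop above only conveys the predict-compare-correct cycle the theory assumes.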

  6. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  7. TOPICAL REVIEW: Prosthetic interfaces with the visual system: biological issues

    NASA Astrophysics Data System (ADS)

    Cohen, Ethan D.

    2007-06-01

    The design of effective visual prostheses for the blind represents a challenge for biomedical engineers and neuroscientists. Significant progress has been made in the miniaturization and processing power of prosthesis electronics; however, development lags in the design and construction of effective machine brain interfaces with visual system neurons. This review summarizes what has been learned about stimulating neurons in the human and primate retina, lateral geniculate nucleus and visual cortex. Each level of the visual system presents unique challenges for neural interface design. Blind patients with the retinal degenerative disease retinitis pigmentosa (RP) are a common population in clinical trials of visual prostheses. The visual performance abilities of normally sighted individuals and RP patients are compared. To generate pattern vision in blind patients, the visual prosthetic interface must effectively stimulate the retinotopically organized neurons in the central visual field to elicit patterned visual percepts. The development of more biologically compatible methods of stimulating visual system neurons is critical to the development of finer spatial percepts. Prosthesis electrode arrays need to adapt to different optimal stimulus locations, stimulus patterns, and patient disease states.

  8. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content.

    PubMed

    Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel

    2013-09-01

    In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using animal threatening vs. non-threatening face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared to left presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces. Accordingly, peripheral processing of these stimuli activated more strongly the putaminal region, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.

  9. Detection and quantification of flow consistency in business process models.

    PubMed

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
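    The abstract proposes quantifying "consistency of flow direction" as a computational metric over a process model's layout. One simple reading, not necessarily any of the authors' three metrics, is the fraction of edges aligned with the model's dominant flow direction. The function name and the edge representation (pairs of node-center coordinates) are assumptions for illustration.

```python
from collections import Counter

def flow_consistency(edges):
    """Fraction of edges pointing in the dominant compass direction.
    edges: list of ((x1, y1), (x2, y2)) node-center coordinate pairs."""
    def direction(edge):
        (x1, y1), (x2, y2) = edge
        dx, dy = x2 - x1, y2 - y1
        if abs(dx) >= abs(dy):
            return "right" if dx >= 0 else "left"
        return "down" if dy >= 0 else "up"

    counts = Counter(direction(e) for e in edges)
    return max(counts.values()) / len(edges)

# Two rightward edges and one downward edge: consistency 2/3.
c = flow_consistency([((0, 0), (1, 0)), ((1, 0), (2, 0)), ((2, 0), (2, 1))])
```

    A perfectly left-to-right model scores 1.0; the empirical question the paper studies is which such formalization best predicts human perception of flow.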

  10. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications for which automatic data processing can exceed human visual cognition capabilities, and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  11. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  12. Review of intraoperative optical coherence tomography: technology and applications [Invited

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Viehland, Christian; Keller, Brenton; Draelos, Mark; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.

    2017-01-01

    During microsurgery, en face imaging of the surgical field through the operating microscope limits the surgeon’s depth perception and visualization of instruments and sub-surface anatomy. Surgical procedures outside microsurgery, such as breast tumor resections, may also benefit from visualization of the sub-surface tissue structures. The widespread clinical adoption of optical coherence tomography (OCT) in ophthalmology and its growing prominence in other fields, such as cancer imaging, has motivated the development of intraoperative OCT for real-time tomographic visualization of surgical interventions. This article reviews key technological developments in intraoperative OCT and their applications in human surgery. We focus on handheld OCT probes, microscope-integrated OCT systems, and OCT-guided laser treatment platforms designed for intraoperative use. Moreover, we discuss intraoperative OCT adjuncts and processing techniques currently under development to optimize the surgical feedback derivable from OCT data. Lastly, we survey salient clinical studies of intraoperative OCT for human surgery. PMID:28663853

  13. Cultural and Species Differences in Gazing Patterns for Marked and Decorated Objects: A Comparative Eye-Tracking Study

    PubMed Central

    Mühlenbeck, Cordelia; Jacobsen, Thomas; Pritsch, Carla; Liebal, Katja

    2017-01-01

    Objects from the Middle Paleolithic period colored with ochre and marked with incisions represent the beginning of non-utilitarian object manipulation in different species of the Homo genus. To investigate the visual effects caused by these markings, we compared humans who have different cultural backgrounds (Namibian hunter–gatherers and German city dwellers) to one species of non-human great apes (orangutans) with respect to their perceptions of markings on objects. We used eye-tracking to analyze their fixation patterns and the durations of their fixations on marked and unmarked stones and sticks. In an additional test, humans evaluated the objects regarding their aesthetic preferences. Our hypotheses were that colorful markings help an individual to structure the surrounding world by making certain features of the environment salient, and that aesthetic appreciation should be associated with this structuring. Our results showed that humans fixated on the marked objects longer and used them in the structural processing of the objects and their background, but did not consistently report finding them more beautiful. Orangutans, in contrast, did not distinguish between object and background in their visual processing and did not clearly fixate longer on the markings. Our results suggest that marking behavior is characteristic for humans and evolved as an attention-directing rather than aesthetic benefit. PMID:28167923

  14. MALINA: a web service for visual analytics of human gut microbiota whole-genome metagenomic reads.

    PubMed

    Tyakht, Alexander V; Popenko, Anna S; Belenikin, Maxim S; Altukhov, Ilya A; Pavlenko, Alexander V; Kostryukova, Elena S; Selezneva, Oksana V; Larin, Andrei K; Karpova, Irina Y; Alexeev, Dmitry G

    2012-12-07

    MALINA is a web service for bioinformatic analysis of whole-genome metagenomic data obtained from human gut microbiota sequencing. As input data, it accepts metagenomic reads of various sequencing technologies, including long reads (such as Sanger and 454 sequencing) and next-generation (including SOLiD and Illumina). To the authors' knowledge, it is the first metagenomic web service capable of processing SOLiD color-space reads. The web service allows phylogenetic and functional profiling of metagenomic samples using coverage depth resulting from the alignment of the reads to the catalogue of reference sequences which are built into the pipeline and contain prevalent microbial genomes and genes of human gut microbiota. The obtained metagenomic composition vectors are processed by the statistical analysis and visualization module containing methods for clustering, dimension reduction and group comparison. Additionally, the MALINA database includes vectors of bacterial and functional composition for human gut microbiota samples from a large number of existing studies allowing their comparative analysis together with user samples, namely datasets from Russian Metagenome project, MetaHIT and Human Microbiome Project (downloaded from http://hmpdacc.org). MALINA is made freely available on the web at http://malina.metagenome.ru. The website is implemented in JavaScript (using Ext JS), Microsoft .NET Framework, MS SQL, Python, with all major browsers supported.

  15. Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?

    PubMed

    Wahn, Basil; König, Peter

    2017-01-01

    Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves resources shared across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes capability to process currently relevant information.

  16. Visualization of decision processes using a cognitive architecture

    NASA Astrophysics Data System (ADS)

    Livingston, Mark A.; Murugesan, Arthi; Brock, Derek; Frost, Wende K.; Perzanowski, Dennis

    2013-01-01

    Cognitive architectures are computational theories of reasoning the human mind engages in as it processes facts and experiences. A cognitive architecture uses declarative and procedural knowledge to represent mental constructs that are involved in decision making. Employing a model of behavioral and perceptual constraints derived from a set of one or more scenarios, the architecture reasons about the most likely consequence(s) of a sequence of events. Reasoning of any complexity and depth involving computational processes, however, is often opaque and challenging to comprehend. Arguably, for decision makers who may need to evaluate or question the results of autonomous reasoning, it would be useful to be able to inspect the steps involved in an interactive, graphical format. When a chain of evidence and constraint-based decision points can be visualized, it becomes easier to explore both how and why a scenario of interest will likely unfold in a particular way. In initial work on a scheme for visualizing cognitively-based decision processes, we focus on generating graphical representations of models run in the Polyscheme cognitive architecture. Our visualization algorithm operates on a modified version of Polyscheme's output, which is accomplished by augmenting models with a simple set of tags. We provide example visualizations and discuss properties of our technique that pose challenges for our representation goals. We conclude with a summary of feedback solicited from domain experts and practitioners in the field of cognitive modeling.

  17. Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review.

    PubMed

    Spering, Miriam; Montagnini, Anna

    2011-04-22

    Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. Visual scan paths are abnormal in deluded schizophrenics.

    PubMed

    Phillips, M L; David, A S

    1997-01-01

    One explanation for delusion formation is that delusions result from a distorted appreciation of complex stimuli. The study investigated delusions in schizophrenia using a physiological marker of visual attention and information processing, the visual scan path-a map tracing the direction and duration of gaze when an individual views a stimulus. The aim was to demonstrate the presence of a specific deficit in processing meaningful stimuli (e.g. human faces) in deluded schizophrenics (DS) by relating this to abnormal viewing strategies. Visual scan paths were measured in acutely-deluded (n = 7) and non-deluded (n = 7) schizophrenics matched for medication, illness duration and negative symptoms, plus 10 age-matched normal controls. DS employed abnormal strategies for viewing single faces and face pairs in a recognition task, staring at fewer points and fixating non-feature areas to a significantly greater extent than both control groups (P < 0.05). The results indicate that DS direct their attention to less salient visual information when viewing faces. Future paradigms employing more complex stimuli and testing DS when less-deluded will allow further clarification of the relationship between viewing strategies and delusions.
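    The scan path analysis described above boils down to two measurable quantities per trial: how many points the viewer fixated, and what share of gaze time fell on feature vs. non-feature areas. A rough sketch of that computation follows; the data format (fixations as x, y, duration triples; feature regions as bounding boxes) and the function name are illustrative assumptions, not the authors' analysis code.

```python
def scanpath_stats(fixations, feature_regions):
    """Summarize a visual scan path.
    fixations: list of (x, y, duration_ms) tuples.
    feature_regions: list of (xmin, ymin, xmax, ymax) boxes around
    salient features (e.g. eyes, nose, mouth of a face).
    Returns (number of fixations, fraction of gaze time on features)."""
    total_ms = sum(d for _, _, d in fixations)
    feature_ms = sum(
        d for x, y, d in fixations
        if any(x0 <= x <= x1 and y0 <= y <= y1
               for x0, y0, x1, y1 in feature_regions)
    )
    return len(fixations), feature_ms / total_ms

# One short fixation inside a feature box, one long one outside it.
n_fix, feature_frac = scanpath_stats(
    [(1, 1, 100), (5, 5, 300)], [(0, 0, 2, 2)]
)
```

    On measures like these, the abstract reports that deluded subjects fixate fewer points and spend more time on non-feature areas than controls.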

  19. Functional Relationships for Investigating Cognitive Processes

    PubMed Central

    Wright, Anthony A.

    2013-01-01

    Functional relationships (from systematic manipulation of critical variables) are advocated for revealing fundamental processes of (comparative) cognition—through examples from my work in psychophysics, learning, and memory. Functional relationships for pigeon wavelength (hue) discrimination revealed best discrimination at the spectral points of hue transition for pigeons—a correspondence (i.e., functional relationship) similar to that for humans. Functional relationships for learning revealed: Item-specific or relational learning in matching to sample as a function of the pigeons’ sample-response requirement, and same/different abstract-concept learning as a function of the training set size for rhesus monkeys, capuchin monkeys, and pigeons. Functional relationships for visual memory revealed serial position functions (a 1st order functional relationship) that changed systematically with retention delay (a 2nd order relationship) for pigeons, capuchin monkeys, rhesus monkeys, and humans. Functional relationships for rhesus-monkey auditory memory also revealed systematic changes in serial position functions with delay, but these changes were opposite to those for visual memory. Functional relationships for proactive interference revealed interference that varied as a function of a ratio of delay times. Functional relationships for change detection memory revealed (qualitative) similarities and (quantitative) differences in human and monkey visual short term memory as a function of the number of memory items. It is concluded that these findings were made possible by varying critical variables over a substantial portion of the manipulable range to generate functions and derive relationships. PMID:23174335

  20. Neurotechnology for intelligence analysts

    NASA Astrophysics Data System (ADS)

    Kruse, Amy A.; Boyd, Karen C.; Schulman, Joshua J.

    2006-05-01

    Geospatial Intelligence Analysts are currently faced with an enormous volume of imagery, only a fraction of which can be processed or reviewed in a timely operational manner. Computer-based target detection efforts have failed to yield the speed, flexibility and accuracy of the human visual system. Rather than focus solely on artificial systems, we hypothesize that the human visual system is still the best target detection apparatus currently in use, and with the addition of neuroscience-based measurement capabilities it can surpass the throughput of the unaided human severalfold. Using electroencephalography (EEG), Thorpe et al. [1] described a fast signal in the brain associated with the early detection of targets in static imagery using a Rapid Serial Visual Presentation (RSVP) paradigm. This finding suggests that it may be possible to extract target detection signals from complex imagery in real time utilizing non-invasive neurophysiological assessment tools. To transform this phenomenon into a capability for defense applications, the Defense Advanced Research Projects Agency (DARPA) currently is sponsoring an effort titled Neurotechnology for Intelligence Analysts (NIA). The vision of the NIA program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Successful development of a neurobiologically-based image triage system will enable image analysts to train more effectively and process imagery with greater speed and precision.

  1. A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.

    PubMed

    Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent

    2007-07-20

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
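    The "surprise" measure invoked above is, in the Itti and Baldi formulation, the Kullback-Leibler divergence between an observer's posterior and prior beliefs after seeing new data. A minimal sketch for discrete distributions (assuming strictly positive probabilities; the function name is illustrative):

```python
import numpy as np

def bayesian_surprise(prior, posterior):
    """KL(posterior || prior) in bits: how much the data moved the
    observer's beliefs. Zero when the data change nothing."""
    prior = np.asarray(prior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    return float(np.sum(posterior * np.log2(posterior / prior)))

# Unchanged beliefs carry no surprise; a sharp update carries a lot.
s_same = bayesian_surprise([0.5, 0.5], [0.5, 0.5])
s_diff = bayesian_surprise([0.5, 0.5], [0.9, 0.1])
```

    In the study, high surprise in images adjacent to the target predicted detection failures, which is what motivated the reordering manipulation in the second experiment.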

  2. Examining the cognitive demands of analogy instructions compared to explicit instructions.

    PubMed

    Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich

    2016-10-01

    In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.

  3. Characterization of electroencephalography signals for estimating saliency features in videos.

    PubMed

    Liang, Zhen; Hamada, Yasuyuki; Oba, Shigeyuki; Ishii, Shin

    2018-05-12

    Understanding the functions of the visual system has been one of the major targets in neuroscience for many years. However, the relation between spontaneous brain activities and visual saliency in natural stimuli has yet to be elucidated. In this study, we developed an optimized machine learning-based decoding model to explore the possible relationships between the electroencephalography (EEG) characteristics and visual saliency. The optimal features were extracted from the EEG signals and saliency map which was computed according to an unsupervised saliency model (Tavakoli and Laaksonen, 2017). Subsequently, various unsupervised feature selection/extraction techniques were examined using different supervised regression models. The robustness of the presented model was fully verified by means of a ten-fold or nested cross-validation procedure, and promising results were achieved in the reconstruction of saliency features based on the selected EEG characteristics. Through the successful demonstration of using EEG characteristics to predict the real-time saliency distribution in natural videos, we suggest the feasibility of quantifying visual content through measuring brain activities (EEG signals) in real environments, which would facilitate the understanding of cortical involvement in the processing of natural visual stimuli and application developments motivated by human visual processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
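    The decoding pipeline described (supervised regression from EEG features to saliency features, validated with ten-fold cross-validation) can be sketched with a plain ridge regressor. This is a generic stand-in under stated assumptions, not the authors' optimized model; the function names and the closed-form ridge solution are illustrative.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: solve (X^T X + alpha I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def kfold_mse(X, y, k=10, alpha=1.0):
    """Mean squared prediction error over k held-out folds."""
    idx = np.arange(len(y))
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], alpha)
        errors.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errors))

# Synthetic stand-in: 100 "trials" of 3 EEG features predicting a
# linearly dependent saliency feature, so held-out error should be tiny.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, 2.0, 3.0])
mse = kfold_mse(X, y, k=10, alpha=1e-8)
```

    With real EEG data the interesting question is whether the cross-validated error stays meaningfully below a shuffled-label baseline, which is the kind of evidence the abstract reports.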

  4. Selective visual attention to emotional words: Early parallel frontal and visual activations followed by interactive effects in visual cortex.

    PubMed

    Schindler, Sebastian; Kissler, Johanna

    2016-10-01

    Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.
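    ERP components like the EPN and LPP named above are conventionally quantified as the mean voltage within a latency window (the 200-300 ms and 400-700 ms bounds come from the abstract). A minimal sketch of that measurement; the function name and the synthetic single-channel waveform are illustrative.

```python
import numpy as np

def component_amplitude(erp, times, window):
    """Mean ERP amplitude inside a latency window.
    erp: 1-D voltage array; times: matching 1-D array of latencies in ms;
    window: (lo, hi) bounds in ms, inclusive."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return float(np.mean(erp[mask]))

# Synthetic waveform: a flat 5 uV deflection confined to 200-300 ms,
# so the EPN-window mean recovers exactly that value.
times = np.arange(0.0, 700.0)
erp = np.where((times >= 200) & (times <= 300), 5.0, 0.0)
amp = component_amplitude(erp, times, (200, 300))
```

    Comparing such window means across conditions (emotional vs. neutral, attended vs. unattended) is what yields the amplification and interaction effects the study reports.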

  5. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    PubMed Central

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461

  6. The dorsal "action" pathway.

    PubMed

    Gallivan, Jason P; Goodale, Melvyn A

    2018-01-01

    In 1992, Goodale and Milner proposed a division of labor in the visual pathways of the primate cerebral cortex. According to their account, the ventral pathway, which projects to occipitotemporal cortex, constructs our visual percepts, while the dorsal pathway, which projects to posterior parietal cortex, mediates the visual control of action. Although the framing of the two-visual-system hypothesis has not been without controversy, it is clear that vision for action and vision for perception have distinct computational requirements, and significant support for the proposed neuroanatomic division has continued to emerge over the last two decades from human neuropsychology, neuroimaging, behavioral psychophysics, and monkey neurophysiology. In this chapter, we review much of this evidence, with a particular focus on recent findings from human neuroimaging and monkey neurophysiology, demonstrating a specialized role for parietal cortex in visually guided behavior. But even though the available evidence suggests that dedicated circuits mediate action and perception, in order to produce adaptive goal-directed behavior there must be a close coupling and seamless integration of information processing across these two systems. We discuss such ventral-dorsal-stream interactions and argue that the two pathways play different, yet complementary, roles in the production of skilled behavior. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Auditory Working Memory Load Impairs Visual Ventral Stream Processing: Toward a Unified Model of Attentional Load

    ERIC Educational Resources Information Center

    Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael

    2010-01-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…

  8. Advances in color science: from retina to behavior

    PubMed Central

    Chatterjee, Soumya; Field, Greg D.; Horwitz, Gregory D.; Johnson, Elizabeth N.; Koida, Kowa; Mancuso, Katherine

    2010-01-01

    Color has become a premier model system for understanding how information is processed by neural circuits, and for investigating the relationships among genes, neural circuits and perception. Both the physical stimulus for color and the perceptual output experienced as color are quite well characterized, but the neural mechanisms that underlie the transformation from stimulus to perception are incompletely understood. The past several years have seen important scientific and technical advances that are changing our understanding of these mechanisms. Here, and in the accompanying minisymposium, we review the latest findings and hypotheses regarding color computations in the retina, primary visual cortex and higher-order visual areas, focusing on non-human primates, a model of human color vision. PMID:21068298

  9. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of “Green” Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex

    PubMed Central

    Rauscher, Franziska G.; Plant, Gordon T.; James-Galton, Merle; Barbur, John L.

    2011-01-01

    Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale de l'Eclairage (CIE) (x,y) chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show a symmetric increase in thresholds towards the long wavelength (“red”) and middle wavelength (“green”) regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient’s results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both “red/green” and “yellow/blue” directions in colour space, the subject’s lower left quadrant showed a marked asymmetry in “red/green” thresholds with the greatest loss of sensitivity towards the “green” region of the spectrum locus. This spatially localized asymmetric loss of “green” but not “red” sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage of neural substrates in the visual cortex that process colour information, but are spectrally non-opponent. PMID:27956924

  10. Additive effects of affective arousal and top-down attention on the event-related brain responses to human bodies.

    PubMed

    Hietanen, Jari K; Kirjavainen, Ilkka; Nummenmaa, Lauri

    2014-12-01

    The early visual event-related 'N170 response' is sensitive to human body configuration and it is enhanced to nude versus clothed bodies. We tested whether the N170 response as well as later EPN and P3/LPP responses to nude bodies reflect the effect of increased arousal elicited by these stimuli, or top-down allocation of object-based attention to the nude bodies. Participants saw pictures of clothed and nude bodies and faces. In each block, participants were asked to direct their attention towards stimuli from a specified target category while ignoring others. Object-based attention did not modulate the N170 amplitudes towards attended stimuli; instead N170 response was larger to nude bodies compared to stimuli from other categories. Top-down attention and affective arousal had additive effects on the EPN and P3/LPP responses reflecting later processing stages. We conclude that nude human bodies have a privileged status in the visual processing system due to the affective arousal they trigger. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Feature-based attentional modulations in the absence of direct visual stimulation.

    PubMed

    Serences, John T; Boynton, Geoffrey M

    2007-07-19

    When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.

  12. Pilot Errors Involving Head-Up Displays (HUDs), Helmet-Mounted Displays (HMDs), and Night Vision Goggles (NVGs)

    DTIC Science & Technology

    1992-01-01

    results in stimulation of spatial-motion-location visual processes, which are known to take precedence over any other sensor or cognitive stimuli. In...or version he is flying. This was initially an observation that stimulated the birth of the human-factors engineering discipline during World War II...collisions with the surface, the pilot needs inputs to sensory channels other than the focal visual system. Properly designed auditory and

  13. Method of simulation and visualization of FDG metabolism based on VHP image

    NASA Astrophysics Data System (ADS)

    Cui, Yunfeng; Bai, Jing

    2005-04-01

    FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies. FDG-PET is an important imaging tool for the early diagnosis and treatment of malignant tumors and functional disease. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through dynamic simulation and visualization of the 18F distribution process, based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, a set of corresponding values is assigned to the segmented VHP image; a set of dynamic images is thus derived that shows the 18F distribution in the tissues of interest over the predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, incorporating the principal interaction functions. Compared with original PET images, our visualization presents higher resolution because of the high resolution of the VHP image data, and it shows the distribution process of 18F dynamically. The results of this work can be used in education and related research, and as a tool for PET operators when designing their experiment programs.
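    The central step, assigning each tissue's TTAC value to a labeled image to produce one frame per sampling time, can be sketched as follows. The tissue names, curve values, and tiny label map are invented toy data, not values from the VHP dataset or the cited literature.

```python
# Toy tissue time-activity curves (TTACs): tracer activity per tissue at
# each sampling time. All names and numbers are invented for illustration.
ttac = {
    "gray_matter":  [0.0, 2.5, 4.0, 3.2],
    "white_matter": [0.0, 1.0, 1.8, 1.5],
    "background":   [0.0, 0.0, 0.0, 0.0],
}

# A tiny 2x3 "segmented image": each pixel holds a tissue label.
segmented = [
    ["background", "gray_matter", "gray_matter"],
    ["background", "white_matter", "gray_matter"],
]

def render_frames(segmented, ttac):
    # For each sampling time, replace every tissue label with the
    # activity its TTAC prescribes, yielding one image per time point.
    n_frames = len(next(iter(ttac.values())))
    return [
        [[ttac[label][t] for label in row] for row in segmented]
        for t in range(n_frames)
    ]

frames = render_frames(segmented, ttac)
```

    The resulting frame stack is what gets rendered in 2D or 3D; the apparent resolution is that of the label map, which is why the VHP-based simulation looks sharper than a real PET acquisition.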

  14. Effects of spatial frequency and location of fearful faces on human amygdala activity.

    PubMed

    Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter

    2011-01-31

    Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.
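    Producing LSF and HSF versions of a stimulus amounts to low-pass filtering the image and keeping the residual as the high-pass component. A minimal 1-D sketch using a moving-average blur is shown below; studies of this kind typically filter 2-D face images with Gaussian or Butterworth kernels specified in cycles per image, so the filter choice and values here are illustrative assumptions only.

```python
def low_pass(signal, radius=1):
    # Moving-average blur: a crude low-spatial-frequency (LSF) filter.
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius): min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def split_bands(signal, radius=1):
    # LSF = blurred signal; HSF = the residual fine detail.
    # By construction the two bands sum back to the original signal.
    lsf = low_pass(signal, radius)
    hsf = [s - l for s, l in zip(signal, lsf)]
    return lsf, hsf

# Toy luminance profile across one image row containing a sharp edge.
row = [10, 10, 10, 90, 90, 90]
lsf, hsf = split_bands(row)
```

    The sharp edge survives only in the HSF band, while the LSF band keeps the coarse light-to-dark layout, which is the distinction the filtered face stimuli are designed to exploit.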

  15. Spatial frequency characteristics at image decision-point locations for observers with different radiological backgrounds in lung nodule detection

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Manning, David J.; Dix, Alan; Donovan, Tim

    2009-02-01

    Aim: The goal of the study is to determine the spatial frequency characteristics at image locations of observers' overt and covert decisions, and to find out whether there are similarities within observer groups of the same radiological experience or the same scored accuracy level. Background: The radiological task is a visual search and decision-making procedure involving visual perception and cognitive processing. Humans perceive the world through a number of spatial frequency channels, each sensitive to visual information carried by a different spatial frequency range and orientation. Recent studies have shown that particular physical properties of local and global image-based elements correlate with the performance and the level of experience of human observers in breast cancer and lung nodule detection. Neurological findings in visual perception inspired wavelet applications in vision research because the methodology tries to mimic the brain's processing algorithms. Methods: A wavelet approach to the analysis of a set of postero-anterior chest radiographs has been used to characterize the perceptual preferences of observers with different levels of experience in the radiological task. Psychophysical methodology has been applied to track eye movements over the image, and the ROIs related to the observers' fixation clusters have been analysed in the spaces framed by Daubechies functions. Results: Significant differences have been found between the spatial frequency characteristics at the locations of different decisions.
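    A one-level wavelet decomposition of a fixated ROI can be sketched with the Haar wavelet (Daubechies-1, the simplest member of the family the study uses). The 4x4 patch and the unnormalized transform below are illustrative assumptions, not the authors' implementation.

```python
def haar_1d(v):
    # One level of the Haar transform: pairwise averages (low-pass)
    # followed by pairwise differences (high-pass), unnormalized.
    low = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    high = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return low + high

def haar_2d(patch):
    # Apply the 1-D transform to every row, then to every column,
    # yielding the LL, HL, LH, HH subbands of one decomposition level.
    rows = [haar_1d(row) for row in patch]
    cols = [haar_1d([rows[r][c] for r in range(len(rows))])
            for c in range(len(rows[0]))]
    # Transpose back to row-major order.
    return [[cols[c][r] for c in range(len(cols))] for r in range(len(cols[0]))]

# A 4x4 toy ROI containing a vertical edge, which should load the
# horizontal high-frequency coefficients (top-right quadrant).
roi = [
    [0, 0, 0, 8],
    [0, 0, 0, 8],
    [0, 0, 0, 8],
    [0, 0, 0, 8],
]
coeffs = haar_2d(roi)
```

    The subband energies of such decompositions, computed around fixation clusters, are the kind of spatial frequency characteristics the study compares across observer groups.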

  16. Finding Waldo: Learning about Users from their Interactions.

    PubMed

    Brown, Eli T; Ottley, Alvitta; Zhao, Helen; Quan Lin; Souvenir, Richard; Endert, Alex; Chang, Remco

    2014-12-01

    Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 83% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user's personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time: in one case 95% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
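    As a hedged illustration of mining interaction encodings with a well-known machine learning algorithm, the sketch below classifies users as fast or slow with k-nearest neighbors. The feature encoding (clicks per minute, mean pause length) and all data are invented for illustration; the paper's specific algorithms and encodings are not reproduced here.

```python
import math

def euclidean(a, b):
    # Distance between two users' interaction-feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    # Majority vote among the k nearest labeled interaction encodings.
    nearest = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Invented per-user features: (clicks per minute, mean pause in seconds),
# labeled by whether the user completed the search task fast or slow.
train = [
    ((30.0, 1.2), "fast"), ((28.0, 1.5), "fast"), ((33.0, 0.9), "fast"),
    ((12.0, 4.0), "slow"), ((10.0, 5.2), "slow"), ((14.0, 3.8), "slow"),
]
print(knn_predict(train, (29.0, 1.1)))
```

    The paper's point about limited observation time corresponds to computing the query features from only an early fraction of a user's interaction log before predicting.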

  17. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.

    PubMed Central

    Malach, R; Reppas, J B; Benson, R R; Kwong, K K; Jiang, H; Kennedy, W A; Ledden, P J; Brady, T J; Rosen, B R; Tootell, R B

    1995-01-01

    The stages of integration leading from local feature analysis to object recognition were explored in human visual cortex by using the technique of functional magnetic resonance imaging. Here we report evidence for object-related activation. Such activation was located at the lateral-posterior aspect of the occipital lobe, just abutting the posterior aspect of the motion-sensitive area MT/V5, in a region termed the lateral occipital complex (LO). LO showed preferential activation to images of objects, compared to a wide range of texture patterns. This activation was not caused by a global difference in the Fourier spatial frequency content of objects versus texture images, since object images produced enhanced LO activation compared to textures matched in power spectra but randomized in phase. The preferential activation to objects also could not be explained by different patterns of eye movements: similar levels of activation were observed when subjects fixated on the objects and when they scanned the objects with their eyes. Additional manipulations such as spatial frequency filtering and a 4-fold change in visual size did not affect LO activation. These results suggest that the enhanced responses to objects were not a manifestation of low-level visual processing. A striking demonstration that activity in LO is uniquely correlated to object detectability was produced by the "Lincoln" illusion, in which blurring of objects digitized into large blocks paradoxically increases their recognizability. Such blurring led to significant enhancement of LO activation. Despite the preferential activation to objects, LO did not seem to be involved in the final, "semantic," stages of the recognition process. Thus, objects varying widely in their recognizability (e.g., famous faces, common objects, and unfamiliar three-dimensional abstract sculptures) activated it to a similar degree. 
These results are thus evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex. PMID:7667258
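    Building textures "matched in power spectra but randomized in phase" can be sketched in one dimension: keep every Fourier magnitude, draw new phases, and mirror them conjugate-symmetrically so the inverse transform stays real. Real stimuli are 2-D images and would use an FFT; the naive DFT and toy signal below are illustrative only.

```python
import cmath, math, random

def dft(x):
    # Naive discrete Fourier transform (fine for a short toy signal).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    # Inverse DFT, returning the real part of each sample.
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def phase_scramble(x, rng):
    # Keep each Fourier magnitude, replace its phase with a random one,
    # and mirror phases conjugate-symmetrically (leaving DC and Nyquist
    # untouched) so the reconstructed signal is real-valued.
    n = len(x)
    X = dft(x)
    Y = X[:]
    for k in range(1, n // 2):
        Y[k] = cmath.rect(abs(X[k]), rng.uniform(0, 2 * math.pi))
        Y[n - k] = Y[k].conjugate()
    return idft(Y)

rng = random.Random(0)
signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
scrambled = phase_scramble(signal, rng)
```

    Because only phases change, the scrambled control has exactly the original power spectrum (and mean luminance), which is what lets the study attribute the extra LO activation to phase structure, i.e. to object content.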

  18. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a day-time reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. Registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image; the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to address the computational complexity of color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are thus realized by the system. Experimental results show that the color-transferred images have a natural color appearance to human eyes and can highlight targets effectively against clear background details. Human observers using this system should be able to interpret the imagery better and faster, thereby improving situational awareness and reducing target detection time.
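    Statistics-based color transfer compiled into a per-channel lookup table can be sketched as below. This is a generic Reinhard-style mean/standard-deviation matching, assumed here as one plausible reading of the paper's "lookup table based on statistical properties"; the channel statistics are invented and the system's actual table construction may differ.

```python
def build_transfer_lut(src_mean, src_std, ref_mean, ref_std):
    # 256-entry lookup table for one 8-bit channel: shift the source
    # channel to zero mean, rescale to the reference spread, then shift
    # to the reference mean, clamping to the valid pixel range.
    lut = []
    for v in range(256):
        mapped = (v - src_mean) * (ref_std / src_std) + ref_mean
        lut.append(min(255, max(0, round(mapped))))
    return lut

# Invented statistics: a dim, low-contrast fused channel mapped toward a
# brighter, higher-contrast daytime reference channel.
lut = build_transfer_lut(src_mean=60, src_std=20, ref_mean=120, ref_std=40)
pixels = [40, 60, 80]
print([lut[p] for p in pixels])
```

    Because the table depends only on global image statistics, it is built once per fixed scene and applied per pixel as a single array lookup, which is what makes real-time colorization feasible on a DSP.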

  19. Three-dimensional visualization of morphology and ventilation procedure (air flow and diffusion) of a subdivision of the acinus using synchrotron radiation microtomography of the human lung specimens

    NASA Astrophysics Data System (ADS)

    Shimizu, Kenji; Ikura, Hirohiko; Ikezoe, Junpei; Nagareda, Tomofumi; Yagi, Naoto; Umetani, Keiji; Imai, Yutaka

    2004-04-01

    We have previously reported a synchrotron radiation (SR) microtomography system constructed at the bending magnet beamline of SPring-8. This system has been applied to lungs obtained at autopsy and inflated and fixed by Heitzman's method. A normal lung and lung specimens with two different types of pathologic processes (fibrosis and emphysema) were included. Serial SR microtomographic images were stacked to yield isotropic volumetric data at high resolution (12 μm isotropic voxels). Within the air spaces of a subdivision of the acinus, each voxel is segmented three-dimensionally using a region growing algorithm (a "rolling ball" algorithm). For each voxel within the segmented air spaces, two types of voxel coding were performed: single-seeded (SS) coding and boundary-seeded (BS) coding, in which the minimum distance to each voxel was calculated and assigned as its code value, using an initial point as the only seed in the former case and all object boundary voxels as the seed set in the latter. With these two codes, combinations of surface rendering and volume rendering techniques were applied to visualize the three-dimensional morphology of a subdivision of the acinus. Furthermore, the sequential filling of air into a subdivision of the acinus was simulated under several conditions to visualize the ventilation procedure (air flow and diffusion). A subdivision of the acinus was reconstructed three-dimensionally, demonstrating the normal architecture of the human lung. Significant differences in the appearance of the ventilation procedure were observed between the normal lung and the two pathologic processes, due to the alteration of the lung architecture. Three-dimensional reconstruction of the microstructure of a subdivision of the acinus and visualization of the ventilation procedure (air flow and diffusion) with SR microtomography offer a new approach to studying the morphology, physiology, and pathophysiology of the human respiratory system.
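    Both voxel codings reduce to breadth-first distance transforms over the segmented air space: a single seed gives the SS code, while seeding every boundary voxel gives the BS code. The 2-D toy "corridor" below is an invented illustration; the real method runs on 3-D voxel data.

```python
from collections import deque

def voxel_code(grid, seeds):
    # Breadth-first distance coding: each in-object cell receives the
    # minimum step count from any seed cell. One seed -> single-seeded
    # (SS) code; all boundary cells as seeds -> boundary-seeded (BS) code.
    dist = {cell: 0 for cell in seeds}
    queue = deque(seeds)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) in grid and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

# A toy 2-D air space: a 1x5 corridor of cells.
airspace = {(0, c) for c in range(5)}
ss = voxel_code(airspace, [(0, 0)])          # single-seeded: distance from one end
bs = voxel_code(airspace, [(0, 0), (0, 4)])  # boundary-seeded: distance from both ends
```

    Rendering voxels in order of increasing SS code is one simple way to animate the sequential filling of air from an inlet, while the BS code measures depth from the object boundary.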

  20. Right hemispheric dominance in gaze-triggered reflexive shift of attention in humans.

    PubMed

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-11-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects while reaction time (RT) was measured. A face identification task was also given to determine hemispheric dominance in face processing for each subject. RT differences between valid and invalid cues were larger when cues were presented in the left rather than the right visual field. This held true regardless of individual hemispheric dominance in face processing. Together, these results indicate right hemispheric dominance in gaze-triggered reflexive shifts of attention in normal healthy subjects.

Top