Sample records for enhanced visual perception

  1. Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception

    ERIC Educational Resources Information Center

    Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…

  2. Cholinergic, But Not Dopaminergic or Noradrenergic, Enhancement Sharpens Visual Spatial Perception in Humans

    PubMed Central

    Wallace, Deanna L.

    2017-01-01

    The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. 
However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568

  3. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  4. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    PubMed Central

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2014-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407

  5. Visual local and global processing in low-functioning deaf individuals with and without autism spectrum disorder.

    PubMed

    Maljaars, J P W; Noens, I L J; Scholte, E M; Verpoorten, R A W; van Berckelaer-Onnes, I A

    2011-01-01

    The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the weak central coherence account. The main focus of the present study is to examine whether enhanced visual perception is also present in low-functioning deaf individuals with and without ASD compared with individuals with ID, and to evaluate the underlying cognitive style in deaf and hearing individuals with ASD. Different sorting tasks (selected from the ComFor) were administered to four subsamples: (1) individuals with ID (n = 68); (2) individuals with ID and ASD (n = 72); (3) individuals with ID and deafness (n = 22); and (4) individuals with ID, ASD and deafness (n = 15). Differences in performance on sorting tasks with meaningful and meaningless materials between the four subgroups were analysed. Age and level of functioning were taken into account. Analyses of covariance revealed that results of deaf individuals with ID and ASD are in line with the results of hearing individuals with ID and ASD. Both groups showed enhanced visual perception, especially on meaningless sorting tasks, when compared with hearing individuals with ID, but not compared with deaf individuals with ID. In ASD, either with or without deafness, enhanced visual perception for meaningless information can be understood within the framework of the central coherence theory, whereas in deafness, enhancement in visual perception might be due to a more generally enhanced visual perception as a result of auditory deprivation. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  6. Visual enhancing of tactile perception in the posterior parietal cortex.

    PubMed

    Ro, Tony; Wallace, Ruth; Hagedorn, Judith; Farnè, Alessandro; Pienkos, Elizabeth

    2004-01-01

    The visual modality typically dominates over our other senses. Here we show that after inducing an extreme conflict in the left hand between vision of touch (present) and the feeling of touch (absent), sensitivity to touch increases for several minutes after the conflict. Transcranial magnetic stimulation of the posterior parietal cortex after this conflict not only eliminated the enduring visual enhancement of touch, but also impaired normal tactile perception. This latter finding demonstrates a direct role of the parietal lobe in modulating tactile perception as a result of the conflict between these senses. These results provide evidence for visual-to-tactile perceptual modulation and demonstrate effects of illusory vision of touch on touch perception through a long-lasting modulatory process in the posterior parietal cortex.

  7. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  8. Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.

    PubMed

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.

  9. Testing the generality of the zoom-lens model: Evidence for visual-pathway specific effects of attended-region size on perception.

    PubMed

    Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark

    2017-05-01

    There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended-region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended-region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.

  10. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  11. Long-Lasting Enhancement of Visual Perception with Repetitive Noninvasive Transcranial Direct Current Stimulation

    PubMed Central

    Behrens, Janina R.; Kraft, Antje; Irlbacher, Kerstin; Gerhardt, Holger; Olma, Manuel C.; Brandt, Stephan A.

    2017-01-01

    Understanding processes performed by an intact visual cortex as the basis for developing methods that enhance or restore visual perception is of great interest to both researchers and medical practitioners. Here, we explore whether contrast sensitivity, a main function of the primary visual cortex (V1), can be improved in healthy subjects by repetitive, noninvasive anodal transcranial direct current stimulation (tDCS). Contrast perception was measured via threshold perimetry directly before and after intervention (tDCS or sham stimulation) on each day over 5 consecutive days (24 subjects, double-blind study). tDCS improved contrast sensitivity from the second day onwards, with significant effects lasting 24 h. After the last stimulation on day 5, the anodal group showed a significantly greater improvement in contrast perception than the sham group (23 vs. 5%). We found significant long-term effects in only the central 2–4° of the visual field 4 weeks after the last stimulation. We suspect a combination of two factors contributes to these lasting effects. First, the V1 area that represents the central retina was located closer to the polarization electrode, resulting in higher current density. Second, the central visual field is represented by a larger cortical area relative to the peripheral visual field (cortical magnification). This is the first study showing that tDCS over V1 enhances contrast perception in healthy subjects for several weeks. This study contributes to the investigation of the causal relationship between the external modulation of neuronal membrane potential and behavior (in our case, visual perception). Because the vast majority of human studies only show temporary effects after single tDCS sessions targeting the visual system, our study underpins the potential for lasting effects of repetitive tDCS-induced modulation of neuronal excitability. PMID:28860969

  12. Enhanced Local Processing of Dynamic Visual Information in Autism: Evidence from Speed Discrimination

    ERIC Educational Resources Information Center

    Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.

    2012-01-01

    An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…

  13. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    ERIC Educational Resources Information Center

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…

  14. The Effect of a Computerized Visual Perception and Visual-Motor Integration Training Program on Improving Chinese Handwriting of Children with Handwriting Difficulties

    ERIC Educational Resources Information Center

    Poon, K. W.; Li-Tsang, C. W. P.; Weiss, T. P. L.; Rosenblum, S.

    2010-01-01

    This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and…

  15. Improving visual perception through neurofeedback

    PubMed Central

    Scharnowski, Frank; Hutton, Chloe; Josephs, Oliver; Weiskopf, Nikolaus; Rees, Geraint

    2012-01-01

    Perception depends on the interplay of ongoing spontaneous activity and stimulus-evoked activity in sensory cortices. This raises the possibility that training ongoing spontaneous activity alone might be sufficient for enhancing perceptual sensitivity. To test this, we trained human participants to control ongoing spontaneous activity in circumscribed regions of retinotopic visual cortex using real-time functional MRI based neurofeedback. After training, we tested participants using a new and previously untrained visual detection task that was presented at the visual field location corresponding to the trained region of visual cortex. Perceptual sensitivity was significantly enhanced only when participants who had previously learned control over ongoing activity were now exercising control, and only for that region of visual cortex. Our new approach allows us to non-invasively and non-pharmacologically manipulate regionally specific brain activity, and thus provide ‘brain training’ to deliver particular perceptual enhancements. PMID:23223302

  16. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    PubMed Central

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
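For readers unfamiliar with the baseline method that VCEA builds on, the classic histogram-equalization mapping can be sketched in a few lines of Python. This is the standard textbook algorithm, not the VCEA variant described above, and the function name is illustrative:

```python
import numpy as np

def histogram_equalize(image: np.ndarray) -> np.ndarray:
    """Classic histogram equalization for an 8-bit grayscale image."""
    # Count pixels at each of the 256 gray levels.
    hist = np.bincount(image.ravel(), minlength=256)
    # Cumulative distribution function of gray levels.
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Stretch the CDF into a lookup table spanning [0, 255].
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    # Map each pixel through the lookup table.
    return lut[image]
```

Because the mapping stretches every occupied gray level toward a uniform histogram, neighboring levels can end up far apart in the output, which is exactly the over-enhancement problem the abstract describes VCEA as addressing.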

  17. Orientation of selective effects of body tilt on visually induced perception of self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    We examined the effect of body posture on visually induced perception of self-motion (vection) at various angles of observer tilt. The experiment indicated that a tilted body could enhance the perceived strength of vertical vection, whereas body tilt had no effect on horizontal vection. This result suggests that there is an interaction between the effects of visual and vestibular information on the perception of self-motion.

  18. Association for Education of the Visually Handicapped Biennial Conference (Forty-Ninth, Toronto, Canada, June 1968).

    ERIC Educational Resources Information Center

    Association for Education of the Visually Handicapped, Philadelphia, PA.

    Essays on the visually handicapped are concerned with congenital rubella, an evaluation of multiply handicapped children, the use and abuse of the IQ, visual perception dysfunction, spatial perceptions in the partially sighted, programs in daily living skills, sex education needs, and physical activity as an enhancement of functioning. Other…

  19. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  20. Elevated arousal levels enhance contrast perception.

    PubMed

    Kim, Dongho; Lokey, Savannah; Ling, Sam

    2017-02-01

    Our state of arousal fluctuates from moment to moment, and these fluctuations can have profound impacts on behavior. Arousal has been proposed to play a powerful, widespread role in the brain, influencing processes as far ranging as perception, memory, learning, and decision making. Although arousal clearly plays a critical role in modulating behavior, the mechanisms underlying this modulation remain poorly understood. To address this knowledge gap, we examined the modulatory role of arousal on one of the cornerstones of visual perception: contrast perception. Using a reward-driven paradigm to manipulate arousal state, we discovered that an elevated arousal state substantially enhances visual sensitivity, incurring a multiplicative modulation of the contrast response. Contrast defines vision, determining whether objects appear visible or invisible to us, and these results indicate that one of the consequences of a decreased arousal state is an impaired ability to visually process our environment.

  21. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    PubMed Central

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It refers to a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on the adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
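The gap-adjustment idea can be illustrated with a minimal sketch: compute the HE mapping, then clip the gap between consecutive output gray levels before rescaling the mapping back to the full range. This is a simplified stand-in for the approach the abstract describes, not the published CegaHE algorithm; the `max_gap` parameter and the hard-clipping rule are invented for illustration:

```python
import numpy as np

def gap_limited_equalize(image: np.ndarray, max_gap: int = 8) -> np.ndarray:
    """Illustrative gap-adjusted equalization (not the published CegaHE)."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    # Ideal HE output level for each input gray value.
    he_levels = cdf * 255
    # Gaps between consecutive output levels; large gaps cause over-enhancement.
    gaps = np.diff(np.concatenate(([0.0], he_levels)))
    # Clip each gap, then rebuild and rescale the mapping to span [0, 255].
    limited = np.cumsum(np.minimum(gaps, max_gap))
    scale = limited[-1] if limited[-1] > 0 else 1.0
    lut = np.round(limited / scale * 255).astype(np.uint8)
    return lut[image]
```

Limiting the gaps keeps adjacent gray values from being pushed far apart, which damps the harsh contrast jumps of plain HE while preserving the overall ordering of the mapping.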

  22. Working memory can enhance unconscious visual perception.

    PubMed

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  23. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  24. Tunnel vision: sharper gradient of spatial attention in autism.

    PubMed

    Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I

    2013-04-17

    Enhanced perception of detail has long been regarded as a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures (visual acuity, contrast discrimination, and flicker detection) is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.

  25. Parallel processing of general and specific threat during early stages of perception

    PubMed Central

    2016-01-01

    Differential processing of threat can consummate as early as 100 ms post-stimulus. Moreover, early perception not only differentiates threat from non-threat stimuli but also distinguishes among discrete threat subtypes (e.g. fear, disgust and anger). Combining spatial-frequency-filtered images of fear, disgust and neutral scenes with high-density event-related potentials and intracranial source estimation, we investigated the neural underpinnings of general and specific threat processing in early stages of perception. Conveyed in low spatial frequencies, fear and disgust images evoked convergent visual responses with similarly enhanced N1 potentials and dorsal visual (middle temporal gyrus) cortical activity (relative to neutral cues; peaking at 156 ms). Nevertheless, conveyed in high spatial frequencies, fear and disgust elicited divergent visual responses, with fear enhancing and disgust suppressing P1 potentials and ventral visual (occipital fusiform) cortical activity (peaking at 121 ms). Therefore, general and specific threat processing operates in parallel in early perception, with the ventral visual pathway engaged in specific processing of discrete threats and the dorsal visual pathway in general threat processing. Furthermore, selectively tuned to distinctive spatial-frequency channels and visual pathways, these parallel processes underpin dimensional and categorical threat characterization, promoting efficient threat response. These findings thus lend support to hybrid models of emotion. PMID:26412811
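The spatial-frequency filtering used to create such stimuli can be sketched with a hard FFT mask. This is a simplified stand-in; published studies typically use smoother Gaussian or Butterworth filters, and the cutoff value here is an arbitrary illustrative choice:

```python
import numpy as np

def spatial_frequency_filter(image: np.ndarray, cutoff: float,
                             keep: str = "low") -> np.ndarray:
    """Keep only low or high spatial frequencies of a 2-D grayscale image.

    `cutoff` is a radius in cycles per image; frequencies at or below it
    are the "low" band, everything above it is the "high" band.
    """
    h, w = image.shape
    # Frequency coordinates in cycles per image along each axis.
    fy = np.fft.fftfreq(h)[:, None] * h
    fx = np.fft.fftfreq(w)[None, :] * w
    radius = np.hypot(fy, fx)
    mask = radius <= cutoff if keep == "low" else radius > cutoff
    # Zero out the unwanted band and transform back to the image domain.
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))
```

Low-pass output preserves the coarse luminance layout of a scene (the route thought to feed fast dorsal processing), while the high-pass output retains edges and fine detail.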

  26. Reading Acquisition Enhances an Early Visual Process of Contour Integration

    ERIC Educational Resources Information Center

    Szwed, Marcin; Ventura, Paulo; Querido, Luis; Cohen, Laurent; Dehaene, Stanislas

    2012-01-01

    The acquisition of reading has an extensive impact on the developing brain and leads to enhanced abilities in phonological processing and visual letter perception. Could this expertise also extend to early visual abilities outside the reading domain? Here we studied the performance of illiterate, ex-illiterate and literate adults closely matched…

  27. Seeing and Feeling for Self and Other: Proprioceptive Spatial Location Determines Multisensory Enhancement of Touch

    ERIC Educational Resources Information Center

    Cardini, Flavia; Haggard, Patrick; Ladavas, Elisabetta

    2013-01-01

    We have investigated the relation between visuo-tactile interactions and the self-other distinction. In the Visual Enhancement of Touch (VET) effect, non-informative vision of one's own hand improves tactile spatial perception. Previous studies suggested that looking at "another" person's hand could also enhance tactile perception, but did not…

  8. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of the visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, whereas visual enhancement of auditory localization persists even after complete V1 damage.
The present study may contribute to advancing understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory-deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
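    The architecture described above (residual V1 feeding extrastriate cortex, with the SC integrating both modalities and projecting back to cortex) can be caricatured in a few lines. This is a deliberately simplified toy, not the published model; all parameter values, including the `v1_gain` lesion parameter, thresholds, and feedback weight, are illustrative assumptions:

```python
import numpy as np

def sigmoid(x, threshold=1.0, slope=6.0):
    # Static sigmoidal activation function for a neural population.
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

def detect_visual(v_input, a_input, v1_gain, w_feedback=1.5):
    """Toy version of auditory enhancement of visual detection.

    v1_gain models residual V1 tissue (1 = intact, 0 = complete lesion).
    The superior colliculus (SC) integrates both modalities and feeds
    back to extrastriate visual cortex.
    """
    v1 = v1_gain * v_input                     # early visual response
    sc = sigmoid(v1 + a_input)                 # multisensory integration in SC
    extrastriate = sigmoid(v1 + w_feedback * sc)
    return bool(extrastriate > 0.5)            # "conscious detection"

weak_visual, loud_sound = 0.6, 0.8
# Spared V1 island: a coincident sound rescues detection of a weak target.
print(detect_visual(weak_visual, 0.0, v1_gain=0.5))          # miss
print(detect_visual(weak_visual, loud_sound, v1_gain=0.5))   # hit
# Complete V1 lesion: the auditory enhancement of detection disappears.
print(detect_visual(weak_visual, loud_sound, v1_gain=0.0))   # miss
```

    Even this caricature reproduces the abstract's key contrast: auditory enhancement of visual detection depends on some surviving V1 drive.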

  9. Reduced efficiency of audiovisual integration for nonnative speech.

    PubMed

    Yi, Han-Gyol; Phelps, Jasmine E B; Smiljanic, Rajka; Chandrasekaran, Bharath

    2013-11-01

    The role of visual cues in native listeners' perception of speech produced by nonnative speakers has not been extensively studied. Native perception of English sentences produced by native English and Korean speakers in audio-only and audiovisual conditions was examined. Korean speakers were rated as more accented in audiovisual than in the audio-only condition. Visual cues enhanced word intelligibility for native English speech but less so for Korean-accented speech. Reduced intelligibility of Korean-accented audiovisual speech was associated with implicit visual biases, suggesting that listener-related factors partially influence the efficiency of audiovisual integration for nonnative speech perception.

  10. Representational Account of Memory: Insights from Aging and Synesthesia.

    PubMed

    Pfeifer, Gaby; Ward, Jamie; Chan, Dennis; Sigala, Natasha

    2016-12-01

    The representational account of memory envisages perception and memory to be on a continuum rather than in discretely divided brain systems [Bussey, T. J., & Saksida, L. M. Memory, perception, and the ventral visual-perirhinal-hippocampal stream: Thinking outside of the boxes. Hippocampus, 17, 898-908, 2007]. We tested this account using a novel between-group design with young grapheme-color synesthetes, older adults, and young controls. We investigated how the disparate sensory-perceptual abilities between these groups translated into associative memory performance for visual stimuli that do not induce synesthesia. ROI analyses of the entire ventral visual stream showed that associative retrieval (a pair-associate retrieved in the absence of a visual stimulus) yielded enhanced activity in young and older adults' visual regions relative to synesthetes, whereas associative recognition (deciding whether a visual stimulus was the correct pair-associate) was characterized by enhanced activity in synesthetes' visual regions relative to older adults. Whole-brain analyses at associative retrieval revealed an effect of age in early visual cortex, with older adults showing enhanced activity relative to synesthetes and young adults. At associative recognition, the group effect was reversed: Synesthetes showed significantly enhanced activity relative to young and older adults in early visual regions. The inverted group effects observed between retrieval and recognition indicate that reduced sensitivity in visual cortex (as in aging) comes with increased activity during top-down retrieval and decreased activity during bottom-up recognition, whereas enhanced sensitivity (as in synesthesia) shows the opposite pattern. Our results provide novel evidence for the direct contribution of perceptual mechanisms to visual associative memory based on the examples of synesthesia and aging.

  11. Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.

    PubMed

    Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf

    2017-09-01

    Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. This work has shown that heartbeat signals affect sensory (e.g., visual) processing; however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled-body (control) image that appeared either at the frequency of the participants' on-line recorded heartbeat or not (not-synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time-window we detected an additional effect, characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information in the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Re-entrant Projections Modulate Visual Cortex in Affective Perception: Evidence From Granger Causality Analysis

    PubMed Central

    Keil, Andreas; Sabatinelli, Dean; Ding, Mingzhou; Lang, Peter J.; Ihssen, Niklas; Heim, Sabine

    2013-01-01

    Re-entrant modulation of visual cortex has been suggested as a critical process for enhancing perception of emotionally arousing visual stimuli. This study explores how the time information inherent in large-scale electrocortical measures can be used to examine the functional relationships among the structures involved in emotional perception. Granger causality analysis was conducted on steady-state visual evoked potentials elicited by emotionally arousing pictures flickering at a rate of 10 Hz. This procedure allows one to examine the direction of neural connections. Participants viewed pictures that varied in emotional content, depicting people in neutral contexts, erotica, or interpersonal attack scenes. Results demonstrated increased coupling between visual and higher-order cortical areas when viewing emotionally arousing content. Specifically, intraparietal to inferotemporal and precuneus to calcarine connections were stronger for emotionally arousing picture content. Thus, we provide evidence for re-entrant signal flow during emotional perception, which originates from higher tiers and enters lower tiers of visual cortex. PMID:18095279
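    Granger causality analysis of the kind applied above asks whether the past of one signal improves prediction of another beyond that signal's own history. The following self-contained toy (using simulated data, not the study's steady-state visual evoked potential time series) compares a restricted against a full autoregressive model:

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for the hypothesis "past x helps predict y".

    Compares a restricted AR model (y regressed on its own past) with a
    full model that adds lagged values of x. A large F means the past of
    x improves the prediction of y beyond y's own history.
    """
    n = len(y)
    past_y = np.array([y[t - lag:t][::-1] for t in range(lag, n)])
    past_x = np.array([x[t - lag:t][::-1] for t in range(lag, n)])
    target = y[lag:]
    ones = np.ones((n - lag, 1))
    X_restricted = np.hstack([ones, past_y])
    X_full = np.hstack([ones, past_y, past_x])

    def rss(X):
        # Residual sum of squares of an ordinary least-squares fit.
        beta = np.linalg.lstsq(X, target, rcond=None)[0]
        resid = target - X @ beta
        return resid @ resid

    rss_r, rss_f = rss(X_restricted), rss(X_full)
    df_num = lag
    df_den = (n - lag) - X_full.shape[1]
    return ((rss_r - rss_f) / df_num) / (rss_f / df_den)

# Simulated data: x drives y with a one-sample delay, but not vice versa.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
y[1:] = 0.8 * x[:-1] + 0.5 * rng.standard_normal(499)
print(granger_f(x, y))  # large: strong evidence that x Granger-causes y
print(granger_f(y, x))  # small: little evidence for the reverse direction
```

    The asymmetry of the two F values is what licenses directional statements such as "intraparietal to inferotemporal" in the abstract above.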

  13. Brief Report: Autism-Like Traits Are Associated with Enhanced Ability to Disembed Visual Forms

    ERIC Educational Resources Information Center

    Sabatino DiCriscio, Antoinette; Troiani, Vanessa

    2017-01-01

    Atypical visual perceptual skills are thought to underlie unusual visual attention in autism spectrum disorders. We assessed whether individual differences in visual processing skills scaled with quantitative traits associated with the broader autism phenotype (BAP). Visual perception was assessed using the Figure-ground subtest of the Test of…

  14. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  15. Enhanced Sensitivity to Subphonemic Segments in Dyslexia: A New Instance of Allophonic Perception

    PubMed Central

    Serniclaes, Willy; Seck, M’ballo

    2018-01-01

    Although dyslexia can be individuated in many different ways, it has only three discernable sources: a visual deficit that affects the perception of letters, a phonological deficit that affects the perception of speech sounds, and an audio-visual deficit that disturbs the association of letters with speech sounds. However, the very nature of each of these core deficits remains debatable. The phonological deficit in dyslexia, which is generally attributed to a deficit of phonological awareness, might result from a specific mode of speech perception characterized by the use of allophonic (i.e., subphonemic) units. Here we will summarize the available evidence and present new data in support of the “allophonic theory” of dyslexia. Previous studies have shown that the dyslexia deficit in the categorical perception of phonemic features (e.g., the voicing contrast between /t/ and /d/) is due to the enhanced sensitivity to allophonic features (e.g., the difference between two variants of /d/). Another consequence of allophonic perception is that it should also give rise to an enhanced sensitivity to allophonic segments, such as those that take place within a consonant cluster. This latter prediction is validated by the data presented in this paper. PMID:29587419

  16. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2013-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…

  17. Normal Visual Acuity and Electrophysiological Contrast Gain in Adults with High-Functioning Autism Spectrum Disorder.

    PubMed

    Tebartz van Elst, Ludger; Bach, Michael; Blessing, Julia; Riedel, Andreas; Bubl, Emanuel

    2015-01-01

    A common neurodevelopmental disorder, autism spectrum disorder (ASD), is defined by specific patterns in social perception, social competence, communication, highly circumscribed interests, and a strong subjective need for behavioral routines. Furthermore, distinctive features of visual perception, such as markedly reduced eye contact and a tendency to focus more on small visual items than on holistic perception, have long been recognized as typical ASD characteristics. Recent debate in the scientific community discusses whether the physiology of low-level visual perception might explain such higher-level visual abnormalities. While earlier reports of enhanced, "eagle-like" visual acuity in ASD contained methodological errors and could not be substantiated, several authors have reported alterations in even earlier stages of visual processing, such as contrast perception and motion perception at the occipital cortex level. Therefore, in this project, we investigated the electrophysiology of very early visual processing by analyzing pattern electroretinogram-based contrast gain, background noise amplitude, and the psychophysical visual acuities of participants with high-functioning ASD and controls with equal education. Based on earlier findings, we hypothesized that alterations in early vision would be present in ASD participants. This study included 33 individuals with ASD (11 female) and 33 control individuals (12 female). The groups were matched in terms of age, gender, and education level. We found no evidence of altered electrophysiological retinal contrast processing or of differences in psychophysically measured visual acuity. There appears to be no evidence for abnormalities in retinal visual processing in ASD patients, at least with respect to contrast detection.

  18. Entwining Psychology and Visual Arts: A Classroom Experience

    ERIC Educational Resources Information Center

    Bahia, Sara; Trindade, Jose Pedro

    2012-01-01

    The purpose of this paper is to show how activating perception, imagery and creativity facilitate the mastery of specific skills of visual arts education. Specifically, the study aimed at answering two questions: How can teachers enhance visual and creative expression?; and What criteria should be used to evaluate specific learning of visual arts…

  19. Enhancing fuzzy robot navigation systems by mimicking human visual perception of natural terrain traversibility

    NASA Technical Reports Server (NTRS)

    Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.

    2001-01-01

    This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.

  20. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear.

    PubMed

    Willems, Roel M; Clevis, Krien; Hagoort, Peter

    2011-09-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of a visual scene that is neutral in itself intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.

  1. PERCEPT: indoor navigation for the blind and visually impaired.

    PubMed

    Ganz, Aura; Gandhi, Siddhesh Rajan; Schafer, James; Singh, Tushar; Puleo, Elaine; Mullett, Gary; Wilson, Carole

    2011-01-01

    In order to enhance the perception of indoor and unfamiliar environments for the blind and visually-impaired, we introduce the PERCEPT system that supports a number of unique features such as: a) Low deployment and maintenance cost; b) Scalability, i.e. we can deploy the system in very large buildings; c) An on-demand system that does not overwhelm the user, as it offers small amounts of information on demand; and d) Portability and ease-of-use, i.e., the custom handheld device carried by the user is compact and instructions are received audibly.

  2. Brief Report: Autism-like Traits are Associated With Enhanced Ability to Disembed Visual Forms.

    PubMed

    Sabatino DiCriscio, Antoinette; Troiani, Vanessa

    2017-05-01

    Atypical visual perceptual skills are thought to underlie unusual visual attention in autism spectrum disorders. We assessed whether individual differences in visual processing skills scaled with quantitative traits associated with the broader autism phenotype (BAP). Visual perception was assessed using the Figure-ground subtest of the Test of visual perceptual skills-3rd Edition (TVPS). In a large adult cohort (n = 209), TVPS-Figure Ground scores were positively correlated with autistic-like social features as assessed by the Broader autism phenotype questionnaire. This relationship was gender-specific, with males showing a correspondence between visual perceptual skills and autistic-like traits. This work supports the link between atypical visual perception and autism and highlights the importance in characterizing meaningful individual differences in clinically relevant behavioral phenotypes.

  3. Pictorial communication in virtual and real environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor)

    1991-01-01

    Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)

  4. Video enhancement method with color-protection post-processing

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Kwak, Youngshin

    2015-01-01

    The current study proposes a post-processing method for video enhancement based on a color-protection technique. Color-protection attenuates perceptible artifacts caused by over-enhancement in visually sensitive image regions such as low-chroma colors, including skin and gray objects. In addition, it reduces the loss of color texture caused by out-of-color-gamut signals. Consequently, the color reproducibility of video sequences can be markedly enhanced while undesirable visual exaggerations are minimized.
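    The color-protection idea described above can be sketched as a chroma-weighted blend between the original and enhanced frames, so that low-chroma (skin- and gray-like) regions receive little or no enhancement. This is an illustrative reconstruction, not the authors' algorithm; `chroma_knee` and the max-minus-min chroma proxy are assumptions:

```python
import numpy as np

def protect_low_chroma(original, enhanced, chroma_knee=0.2):
    """Blend an enhanced frame back toward the original in low-chroma regions.

    Low-chroma pixels (skin- and gray-like colors) are visually sensitive,
    so they receive little or no enhancement. Inputs are float RGB arrays
    in [0, 1]; chroma_knee is an illustrative parameter, not a value from
    the paper.
    """
    # Crude chroma proxy: per-pixel spread between the RGB channels.
    chroma = original.max(axis=-1) - original.min(axis=-1)
    weight = np.clip(chroma / chroma_knee, 0.0, 1.0)[..., None]
    return original + weight * (enhanced - original)

# A gray pixel is fully protected; a saturated red pixel keeps the enhancement.
original = np.array([[[0.5, 0.5, 0.5],
                      [0.9, 0.1, 0.1]]])
enhanced = np.clip(1.3 * original, 0.0, 1.0)
protected = protect_low_chroma(original, enhanced)
```

    In this sketch the protection weight rises linearly with chroma up to the knee, which keeps the transition between protected and enhanced regions free of hard edges.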

  5. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  6. Visual information underpinning skilled anticipation: The effect of blur on a coupled and uncoupled in situ anticipatory response.

    PubMed

    Mann, David L; Abernethy, Bruce; Farrow, Damian

    2010-07-01

    Coupled interceptive actions are understood to be the result of neural processing, and of visual information, distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using either a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four visual blur conditions (plano, +1.00, +2.00, +3.00). Coupled responses were better than uncoupled ones, and blurring of vision had different effects in the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence suggested that low levels of blur may enhance the uncoupled verbal perception of movement.

  7. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

    PubMed Central

    Clevis, Krien; Hagoort, Peter

    2011-01-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of a visual scene that is neutral in itself intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540

  8. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.

    PubMed

    Wright, W Geoffrey

    2014-01-01

    Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  9. Perception, Cognition, and Visualization.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    1991-01-01

    Describes how pictures can combine aspects of naturalistic representation with more formal shapes to enhance cognitive understanding. These "diagrammatic" shapes derive from elementary geometry and thereby lend visual concreteness to the concepts conveyed by the pictures. Leonardo da Vinci's anatomical drawings are used as examples…

  10. It "Feels" like It's Me: Interpersonal Multisensory Stimulation Enhances Visual Remapping of Touch from Other to Self

    ERIC Educational Resources Information Center

    Cardini, Flavia; Tajadura-Jimenez, Ana; Serino, Andrea; Tsakiris, Manos

    2013-01-01

    Understanding other people's feelings in social interactions depends on the ability to map onto our body the sensory experiences we observed on other people's bodies. It has been shown that the perception of tactile stimuli on the face is improved when concurrently viewing a face being touched. This Visual Remapping of Touch (VRT) is enhanced the…

  11. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music

    PubMed Central

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007

  12. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music.

    PubMed

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (-) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.

  13. Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality.

    PubMed

    Zenner, Andre; Kruger, Antonio

    2017-04-01

We introduce the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality with Shifty, a weight-shifting physical DPHF proxy object. DPHF combines actuators known from active haptics with the physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype, then investigate in two experiments how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects. In the first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty significantly increased the user's fun and perceived realism compared with an equivalent passive haptic proxy. In the second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight, and thus the perceived realism, by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for the visual-haptic mismatch perceived during the shifting process.

  14. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  15. Cued Speech for Enhancing Speech Perception and First Language Development of Children With Cochlear Implants

    PubMed Central

    Leybaert, Jacqueline; LaSasso, Carol J.

    2010-01-01

Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has a strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception because it delivers a spectrally degraded signal and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally-hearing individuals, with important inter-subject variability in the weighting of auditory or visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that makes use of visual information from speechreading combined with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and the phonemes of spoken language. We support our view that exposure to Cued Speech before or after implantation could be important in the aural rehabilitation process of cochlear implantees. We describe five lines of research that are converging to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants. PMID:20724357

  16. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

    DTIC Science & Technology

    1993-04-01

suggesting it occurs in later visual motion processing (long-range or second-order system). ... [Figure 2. Gamma motion. (a) A light of fixed spatial extent is illuminated then extinguished. (b) The percept is of a light expanding and then...] ...while smaller, type-B cells provide input to its parvocellular subdivision. From here the magnocellular pathway progresses up through visual cortex area V

  17. Dissociating emotion-induced blindness and hypervision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2009-12-01

    Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.

  18. Category learning increases discriminability of relevant object dimensions in visual cortex.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2013-04-01

    Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.

  19. Using Short-Term Group Counseling with Visually Impaired Adolescents.

    ERIC Educational Resources Information Center

    Johnson, C. L., Jr.; Johnson, J. A.

    1991-01-01

    A group counseling approach was used to enhance the self-concept of 10 congenitally visually impaired adolescents. Group sessions focused on such topics as self-perception, assertiveness, friendship, familial relationships, and independent living skills. Evaluation found significant improvement in self-concept, attitudes toward blindness, and…

  20. When visual perception causes feeling: enhanced cross-modal processing in grapheme-color synesthesia.

    PubMed

    Weiss, Peter H; Zilles, Karl; Fink, Gereon R

    2005-12-01

In synesthesia, stimulation of one sensory modality (e.g., hearing) triggers a percept in another, non-stimulated sensory modality (e.g., vision). Likewise, perception of a form (e.g., a letter) may induce a color percept (i.e., grapheme-color synesthesia). To date, the neural mechanisms underlying synesthesia remain to be elucidated. Using fMRI, while controlling for surface color processing, we found enhanced activity in the left intraparietal cortex during the experience of grapheme-color synesthesia (n = 9). In contrast, the perception of surface color per se activated the color centers in the fusiform gyrus bilaterally. The data support theoretical accounts that grapheme-color synesthesia may originate from enhanced cross-modal binding of form and color. A mismatch between surface color and the synesthetically induced color of a grapheme additionally activated the left dorsolateral prefrontal cortex (DLPFC), suggesting that cognitive control processes become active to resolve the perceptual conflict resulting from synesthesia.

  1. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  2. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds

    PubMed Central

    Wright, W. Geoffrey

    2014-01-01

Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed. PMID:24782724

  3. Rapid Simultaneous Enhancement of Visual Sensitivity and Perceived Contrast during Saccade Preparation

    PubMed Central

    Rolfs, Martin; Carrasco, Marisa

    2012-01-01

    Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086

  4. Enhanced Visual Short-Term Memory for Angry Faces

    ERIC Educational Resources Information Center

    Jackson, Margaret C.; Wu, Chia-Yun; Linden, David E. J.; Raymond, Jane E.

    2009-01-01

    Although some views of face perception posit independent processing of face identity and expression, recent studies suggest interactive processing of these 2 domains. The authors examined expression-identity interactions in visual short-term memory (VSTM) by assessing recognition performance in a VSTM task in which face identity was relevant and…

  5. The Development of a Two-Dimensional Multielectrode Array for Visual Perception Research in the Mammalian Brain.

    DTIC Science & Technology

    1980-12-01

primary and secondary visual cortex or in the secondary visual cortex itself. When the secondary visual cortex is electrically stimulated, the subject...effect enhances their excitability, which reduces the additional stimulation (electrical or chemical) required to elicit an action potential. These...and the peripheral area with rods. The rods have a very low light intensity threshold and provide stimulation to optic nerve fibers for low light

  6. Neocortical Rebound Depolarization Enhances Visual Perception

    PubMed Central

    Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji

    2015-01-01

    Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866

  7. Cross-Modal and Intra-Modal Characteristics of Visual Function and Speech Perception Performance in Postlingually Deafened, Cochlear Implant Users

    PubMed Central

    Kim, Min-Beom; Shim, Hyun-Yong; Jin, Sun Hwa; Kang, Soojin; Woo, Jihwan; Han, Jong Chul; Lee, Ji Young; Kim, Martha; Cho, Yang-Sun

    2016-01-01

    Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. Superior visual abilities of deaf individuals have been shown to result in enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened, adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal hearing, adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEP) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention and Goldmann perimetry measures were analyzed to identify differences across groups in the VF. The association of the amplitude of the P1 VEP response over the right temporal or occipital cortex among three groups (control, good CI, poor CI) was analyzed. In addition, the association between VF by different stimuli and word perception score was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the group of poorly performing CI users than the group of good performers. The P1 amplitude recorded from electrodes near the occipital cortex was smaller for the poor performing group. P1 VEP amplitude in right temporal lobe was negatively correlated with speech perception outcomes for the CI participants (r = -0.736, P = 0.003). However, P1 VEP amplitude measures recorded from near the occipital cortex had a positive correlation with speech perception outcome in the CI participants (r = 0.775, P = 0.001). In VF analysis, CI users showed narrowed central VF (VF to low intensity stimuli). 
However, their far peripheral VF (VF to high intensity stimuli) did not differ from that of controls. In addition, the extent of their central VF was positively correlated with speech perception outcome (r = 0.669, P = 0.009). Persistent visual activation in the right temporal cortex, even after cochlear implantation, has a negative effect on outcomes in post-lingually deafened adults. We interpret these results to suggest that insufficient intra-modal (visual) compensation by the occipital cortex may also negatively affect outcomes. Based on our results, it appears that a narrowed central VF could help identify CI users likely to perform poorly with their device. PMID:26848755

  8. Plastic reorganization of neural systems for perception of others in the congenitally blind.

    PubMed

    Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I

    2017-09-01

    Recent evidence suggests that the function of the core system for face perception might extend beyond visual face-perception to a broader role in person perception. To critically test the broader role of core face-system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  9. Unconscious Learning versus Visual Perception: Dissociable Roles for Gamma Oscillations Revealed in MEG

    ERIC Educational Resources Information Center

    Chaumon, Maximilien; Schwartz, Denis; Tallon-Baudry, Catherine

    2009-01-01

Oscillatory synchrony in the gamma band (30-120 Hz) has been implicated in various cognitive functions, including conscious perception and learning. Explicit memory encoding, in particular, relies on enhanced gamma oscillations. Does this finding extend to unconscious memory encoding? Can we dissociate gamma oscillations related to unconscious…

  10. Dissociated α-band modulations in the dorsal and ventral visual pathways in visuospatial attention and perception.

    PubMed

    Capilla, Almudena; Schoffelen, Jan-Mathijs; Paterson, Gavin; Thut, Gregor; Gross, Joachim

    2014-02-01

    Modulations of occipito-parietal α-band (8-14 Hz) power that are opposite in direction (α-enhancement vs. α-suppression) and origin of generation (ipsilateral vs. contralateral to the locus of attention) are a robust correlate of anticipatory visuospatial attention. Yet, the neural generators of these α-band modulations, their interdependence across homotopic areas, and their respective contribution to subsequent perception remain unclear. To shed light on these questions, we employed magnetoencephalography, while human volunteers performed a spatially cued detection task. Replicating previous findings, we found α-power enhancement ipsilateral to the attended hemifield and contralateral α-suppression over occipito-parietal sensors. Source localization (beamforming) analysis showed that α-enhancement and suppression were generated in 2 distinct brain regions, located in the dorsal and ventral visual streams, respectively. Moreover, α-enhancement and suppression showed different dynamics and contribution to perception. In contrast to the initial and transient dorsal α-enhancement, α-suppression in ventro-lateral occipital cortex was sustained and influenced subsequent target detection. This anticipatory biasing of ventro-lateral extrastriate α-activity probably reflects increased receptivity in the brain region specialized in processing upcoming target features. Our results add to current models on the role of α-oscillations in attention orienting by showing that α-enhancement and suppression can be dissociated in time, space, and perceptual relevance.

  11. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes, which are elicited by an electrode array with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete some simple visual tasks, but more complex tasks such as face identification and object recognition are extremely difficult. It is therefore necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method, based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework, to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
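Direct Pixelization, the baseline against which the two strategies are compared, amounts to block-averaging the input image onto a coarse phosphene grid with a restricted greyscale. A rough sketch of that baseline; the 32 x 32 grid and 8 grey levels are assumptions for illustration, not the paper's actual simulation parameters:

```python
import numpy as np

def direct_pixelization(image, grid=(32, 32), levels=8):
    """Block-average an intensity image onto a coarse phosphene grid
    and quantize to a restricted number of grey levels."""
    h, w = image.shape
    gh, gw = grid
    # Trim so the image tiles evenly into grid cells.
    image = image[: h - h % gh, : w - w % gw]
    blocks = image.reshape(gh, image.shape[0] // gh, gw, image.shape[1] // gw)
    coarse = blocks.mean(axis=(1, 3))  # one mean intensity per phosphene
    # Quantize to the limited greyscale of a simulated prosthetic percept.
    return np.round(coarse / coarse.max() * (levels - 1)) / (levels - 1)

img = np.random.default_rng(1).random((128, 128))  # stand-in for a camera frame
percept = direct_pixelization(img)
print(percept.shape)  # (32, 32)
```

The paper's strategies would run the saliency/grabCut segmentation first and apply enhancement to the foreground before this downsampling step; only the shared pixelization stage is sketched here.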

  12. Image Perception Wavelet Simulation and Enhancement for the Visually Impaired.

    DTIC Science & Technology

    1994-12-01

and Computational Harmonic Analysis, 1:54-81 (1993). 6. Cornsweet, Tom N. "The Staircase-Method in Psychophysics," The American Journal of Psychology...of a Visual Model," Proceedings of the IEEE, 60(7):828-842 (July 1972). 33. Taylor, M. M. and C. Douglas Creelman. "PEST: Efficient Estimates on

  13. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without requiring physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development, and can help to enhance self-motion illusions, presence, and immersion in virtual-reality systems. © 2015 ARVO.

  14. Odours reduce the magnitude of object substitution masking for matching visual targets in females.

    PubMed

    Robinson, Amanda K; Laning, Julia; Reinhard, Judith; Mattingley, Jason B

    2016-08-01

Recent evidence suggests that olfactory stimuli can influence early stages of visual processing, but there has been little focus on whether such olfactory-visual interactions convey an advantage in visual object identification. Moreover, despite evidence that some aspects of olfactory perception are superior in females compared with males, no study to date has examined whether olfactory influences on vision are gender-dependent. We asked whether inhalation of familiar odorants can modulate participants' ability to identify briefly flashed images of matching visual objects under conditions of object substitution masking (OSM). Across two experiments, we had male and female participants (N = 36 in each group) identify masked visual images of odour-related objects (e.g., orange, rose, mint) amongst non-odour-related distracters (e.g., box, watch). In each trial, participants inhaled a single odour that either matched or mismatched the masked, odour-related target. Target detection performance was analysed using a signal detection (d') approach. In females, but not males, matching odours significantly reduced OSM relative to mismatching odours, suggesting that familiar odours can enhance the salience of briefly presented visual objects. We conclude that olfactory cues exert a subtle influence on visual processes by transiently enhancing the salience of matching object representations. The results add to a growing body of literature that points towards consistent gender differences in olfactory perception.
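The signal-detection (d') analysis referred to above computes sensitivity as the difference between the z-transformed hit and false-alarm rates. A minimal sketch, using a common log-linear correction for extreme rates; the correction choice and the trial counts in the example are assumptions, not taken from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 per cell) avoids infinite z-scores
    when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical session: 18/20 targets detected, 4/20 false alarms on lure trials.
print(round(d_prime(18, 2, 4, 16), 2))
```

Because d' separates sensitivity from response bias, a reduction in OSM shows up as higher d' for matching-odour trials rather than as a mere shift in willingness to respond "present".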

  15. Drifting while stepping in place in old adults: Association of self-motion perception with reference frame reliance and ground optic flow sensitivity.

    PubMed

    Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice

    2017-04-07

Optic flow provides visual self-motion information and has been shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance with age on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how the effects of ground optic flow direction on posture may be enhanced by intermittent podal contact with the ground, and how they relate to reliance on the visual FoR and to aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30 s under static stimulation, approaching and receding optic flow on the ground, and a control condition. We calculated center of pressure (COP) translation, and optic flow sensitivity was defined as the ratio of COP translation velocity over absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
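The visual self-motion quotient defined above is a simple ratio of COP translation velocity to the absolute optic-flow velocity. A sketch of how it could be computed from a COP trace; the sampling rate, flow speed, and drift magnitude below are hypothetical numbers, not the study's data:

```python
import numpy as np

def vsq(cop_positions_m, flow_speed_m_s, fs_hz):
    """Visual self-motion quotient: mean COP translation velocity divided by
    the absolute optic-flow velocity (per the definition in the abstract).
    cop_positions_m: 1-D anterior-posterior COP trace in metres."""
    duration_s = (len(cop_positions_m) - 1) / fs_hz
    net_translation = cop_positions_m[-1] - cop_positions_m[0]
    cop_velocity = net_translation / duration_s
    return cop_velocity / abs(flow_speed_m_s)

# Hypothetical trial: 0.06 m forward drift over 30 s against 0.5 m/s ground flow,
# force plate sampled at 100 Hz.
trace = np.linspace(0.0, 0.06, 30 * 100)
print(round(vsq(trace, 0.5, 100.0), 4))
```

A signed VSQ of this form distinguishes drift with the flow (positive for approaching flow here) from drift against it, which is the contrast the abstract draws between QS and SIP.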

  16. The perception of visual emotion: comparing different measures of awareness.

    PubMed

    Szczepanowski, Remigiusz; Traczyk, Jakub; Wierzchoń, Michał; Cleeremans, Axel

    2013-03-01

    Here, we explore the sensitivity of different awareness scales in revealing conscious reports on visual emotion perception. Participants were exposed to a backward masking task involving fearful faces and asked to rate their conscious awareness in perceiving emotion in facial expression using three different subjective measures: confidence ratings (CRs), with the conventional taxonomy of certainty, the perceptual awareness scale (PAS), through which participants categorize "raw" visual experience, and post-decision wagering (PDW), which involves economic categorization. Our results show that the CR measure was the most exhaustive and the most graded. In contrast, the PAS and PDW measures suggested instead that consciousness of emotional stimuli is dichotomous. Possible explanations of the inconsistency were discussed. Finally, our results also indicate that PDW biases awareness ratings by enhancing first-order accuracy of emotion perception. This effect was possibly a result of higher motivation induced by monetary incentives. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Enhanced Fine-Form Perception Does Not Contribute to Gestalt Face Perception in Autism Spectrum Disorder

    PubMed Central

    Maekawa, Toshihiko; Miyanaga, Yuka; Takahashi, Kenji; Takamiya, Naomi; Ogata, Katsuya; Tobimatsu, Shozo

    2017-01-01

    Individuals with autism spectrum disorder (ASD) show superior performance in processing fine detail, but often exhibit impaired gestalt face perception. The ventral visual stream from the primary visual cortex (V1) to the fusiform gyrus (V4) plays an important role in form (including faces) and color perception. The aim of this study was to investigate how the ventral stream is functionally altered in ASD. Visual evoked potentials were recorded in high-functioning ASD adults (n = 14) and typically developing (TD) adults (n = 14). We used three types of visual stimuli as follows: isoluminant chromatic (red/green, RG) gratings, high-contrast achromatic (black/white, BW) gratings with high spatial frequency (HSF, 5.3 cycles/degree), and face (neutral, happy, and angry faces) stimuli. Compared with TD controls, ASD adults exhibited longer N1 latency for RG, shorter N1 latency for BW, and shorter P1 latency, but prolonged N170 latency, for face stimuli. Moreover, a greater difference in latency between P1 and N170, or between N1 for BW and N170 (i.e., the prolongation of cortico-cortical conduction time between V1 and V4) was observed in ASD adults. These findings indicate that ASD adults have enhanced fine-form (local HSF) processing, but impaired color processing at V1. In addition, they exhibit impaired gestalt face processing due to deficits in integration of multiple local HSF facial information at V4. Thus, altered ventral stream function may contribute to abnormal social processing in ASD. PMID:28146575

  18. Lighting design for globally illuminated volume rendering.

    PubMed

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and it has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone-mapping operator that recovers visual details from overexposed areas while maintaining sufficient contrast in dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
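
    The paper's tone-mapping operator is automatic and content-dependent, and its details are not given in the abstract. As a generic illustration of the idea it describes — compressing overexposed highlights while leaving dark-area contrast largely intact — a simple global Reinhard-style operator behaves as follows (a sketch only; the `white_point` parameter is an assumption of this example, not the authors' method):

```python
import numpy as np

def tone_map(luminance, white_point=4.0):
    """Global Reinhard-style tone mapping: values well below 1 pass
    through almost linearly, while bright values are compressed
    toward the white point, recovering highlight detail."""
    L = np.asarray(luminance, dtype=float)
    return L * (1.0 + L / white_point**2) / (1.0 + L)

# Dark values are nearly unchanged; bright values are strongly compressed.
hdr = np.array([0.05, 0.5, 2.0, 8.0])
ldr = tone_map(hdr)
```

    Here `tone_map(0.05)` stays close to 0.05 while `tone_map(8.0)` is pulled down to about 1.33, which is the qualitative behavior the abstract attributes to its operator.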

  19. Visual acuity in adults with Asperger's syndrome: no evidence for "eagle-eyed" vision.

    PubMed

    Falkmer, Marita; Stuart, Geoffrey W; Danielsson, Henrik; Bram, Staffan; Lönebrink, Mikael; Falkmer, Torbjörn

    2011-11-01

    Autism spectrum conditions (ASC) are defined by criteria comprising impairments in social interaction and communication. Altered visual perception is one possible and often discussed cause of difficulties in social interaction and social communication. Recently, Ashwin et al. suggested that enhanced ability in local visual processing in ASC was due to superior visual acuity, but that study has been the subject of methodological criticism, placing the findings in doubt. The present study investigated visual acuity thresholds in 24 adults with Asperger's syndrome and compared their results with 25 control subjects with the 2 Meter 2000 Series Revised ETDRS Chart. The distribution of visual acuities within the two groups was highly similar, and none of the participants had superior visual acuity. Superior visual acuity in individuals with Asperger's syndrome could not be established, suggesting that differences in visual perception in ASC are not explained by this factor. A continued search for explanations of superior ability in local visual processing in persons with ASC is therefore warranted. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  20. Binding and unbinding the auditory and visual streams in the McGurk effect.

    PubMed

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  1. Figure-ground modulation in awake primate thalamus.

    PubMed

    Jones, Helen E; Andolina, Ian M; Shipp, Stewart D; Adams, Daniel L; Cudeiro, Javier; Salt, Thomas E; Sillito, Adam M

    2015-06-02

    Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process.

  2. Figure-ground modulation in awake primate thalamus

    PubMed Central

    Jones, Helen E.; Andolina, Ian M.; Shipp, Stewart D.; Adams, Daniel L.; Cudeiro, Javier; Salt, Thomas E.; Sillito, Adam M.

    2015-01-01

    Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process. PMID:25901330

  3. Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.

    PubMed

    Kok, Peter; de Lange, Floris P

    2014-07-07

    An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds

    ERIC Educational Resources Information Center

    Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.

    2011-01-01

    Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…

  5. Enhanced visual acuity and image perception following correction of highly aberrated eyes using an adaptive optics visual simulator.

    PubMed

    Rocha, Karolinne Maia; Vabre, Laurent; Chateau, Nicolas; Krueger, Ronald R

    2010-01-01

    We evaluated the changes in visual acuity and visual perception generated by correcting higher-order aberrations in highly aberrated eyes using a large-stroke adaptive optics visual simulator. A crx1 Adaptive Optics Visual Simulator (Imagine Eyes) was used to correct and modify the wavefront aberrations in 12 keratoconic eyes and 8 symptomatic postoperative refractive surgery (LASIK) eyes. After measuring ocular aberrations, the device was programmed to compensate for the eye's wavefront error from the second order to the fifth order (6-mm pupil). Visual acuity was assessed through the adaptive optics system using computer-generated ETDRS optotypes and the Freiburg Visual Acuity and Contrast Test. Mean higher-order aberration root-mean-square (RMS) errors in the keratoconus and symptomatic LASIK eyes were 1.88 ± 0.99 μm and 1.62 ± 0.79 μm (6-mm pupil), respectively. The visual simulator correction of the higher-order aberrations present in the keratoconus eyes improved their visual acuity by a mean of 2 lines when compared to their best spherocylinder correction (mean decimal visual acuity with spherocylindrical correction was 0.31 ± 0.18 and improved to 0.44 ± 0.23 with higher-order aberration correction). In the symptomatic LASIK eyes, the mean decimal visual acuity with spherocylindrical correction improved from 0.54 ± 0.16 to 0.71 ± 0.13 with higher-order aberration correction. The visual perception of ETDRS letters was improved when correcting higher-order aberrations. The adaptive optics visual simulator can effectively measure and compensate for higher-order aberrations (second to fifth order), which are associated with diminished visual acuity and perception in highly aberrated eyes. The adaptive optics technology may be of clinical benefit when counseling patients with highly aberrated eyes regarding their maximum subjective potential for vision correction. Copyright 2010, SLACK Incorporated.
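
    The RMS wavefront errors quoted above follow directly from the Zernike expansion of the wavefront: for orthonormal Zernike polynomials (the usual normalization in clinical aberrometry), the RMS error is simply the root-sum-square of the coefficients. A minimal sketch of that relationship — the coefficient values below are hypothetical, not data from the study:

```python
import math

def higher_order_rms(zernike_coeffs_um):
    """RMS wavefront error from orthonormal Zernike coefficients:
    the square root of the sum of squared coefficients (micrometers)."""
    return math.sqrt(sum(c * c for c in zernike_coeffs_um))

# Hypothetical 3rd- to 5th-order coefficients in microns (6-mm pupil)
coeffs = [0.9, -0.7, 0.4, 0.2]
rms = higher_order_rms(coeffs)
```

    With these illustrative coefficients the higher-order RMS comes out to about 1.22 μm, in the same range as the study's keratoconus and post-LASIK groups.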

  6. Enhanced and diminished visuo-spatial information processing in autism depends on stimulus complexity.

    PubMed

    Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn

    2005-10-01

    Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.

  7. Parameters of Semantic Multisensory Integration Depend on Timing and Modality Order among People on the Autism Spectrum: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Russo, N.; Mottron, L.; Burack, J. A.; Jemel, B.

    2012-01-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model;…

  8. Enhanced associative memory for colour (but not shape or location) in synaesthesia.

    PubMed

    Pritchard, Jamie; Rothen, Nicolas; Coolbear, Daniel; Ward, Jamie

    2013-05-01

    People with grapheme-colour synaesthesia have been shown to have enhanced memory on a range of tasks using both stimuli that induce synaesthesia (e.g. words) and, more surprisingly, stimuli that do not (e.g. certain abstract visual stimuli). This study examines the latter by using multi-featured stimuli consisting of shape, colour and location conjunctions (e.g. shape A+colour A+location A; shape B+colour B+location B) presented in a recognition memory paradigm. This enables distractor items to be created in which one of these features is 'unbound' with respect to the others (e.g. shape A+colour B+location A; shape A+colour A+location C). Synaesthetes had higher recognition rates, suggesting an enhanced ability to bind certain visual features together in memory. Importantly, synaesthetes' false alarm rates were lower only when colour was the unbound feature, not shape or location. We suggest that synaesthetes are "colour experts" and that enhanced perception can lead to enhanced memory in very specific ways, but not, for instance, to an enhanced ability to form associations per se. The results support contemporary models that propose a continuum between perception and memory. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Image-guided surgery.

    PubMed

    Wagner, A; Ploder, O; Enislidis, G; Truppe, M; Ewers, R

    1996-04-01

    Interventional video tomography (IVT), a new imaging modality, achieves virtual visualization of anatomic structures in three dimensions for intraoperative stereotactic navigation. Partial immersion into a virtual data space, which is orthotopically coregistered to the surgical field, enhances, by means of a see-through head-mounted display (HMD), the surgeon's visual perception and technique by providing visual access to nonvisual data of anatomy, physiology, and function. The presented cases document the potential of augmented reality environments in maxillofacial surgery.

  10. Neural Processing of Congruent and Incongruent Audiovisual Speech in School-Age Children and Adults

    ERIC Educational Resources Information Center

    Heikkilä, Jenni; Tiippana, Kaisa; Loberg, Otto; Leppänen, Paavo H. T.

    2018-01-01

    Seeing articulatory gestures enhances speech perception. Perception of auditory speech can even be changed by incongruent visual gestures, which is known as the McGurk effect (e.g., dubbing a voice saying /mi/ onto a face articulating /ni/, observers often hear /ni/). In children, the McGurk effect is weaker than in adults, but no previous…

  11. Event-related potentials reveal linguistic suppression effect but not enhancement effect on categorical perception of color.

    PubMed

    Lu, Aitao; Yang, Ling; Yu, Yanping; Zhang, Meichao; Shao, Yulan; Zhang, Honghong

    2014-08-01

    The present study used the event-related potential technique to investigate the nature of linguistic effect on color perception. Four types of stimuli based on hue differences between a target color and a preceding color were used: zero hue step within-category color (0-WC); one hue step within-category color (1-WC); one hue step between-category color (1-BC); and two hue step between-category color (2-BC). The ERP results showed no significant effect of stimulus type in the 100-200 ms time window. However, in the 200-350 ms time window, ERP responses to 1-WC target color overlapped with that to 0-WC target color for right visual field (RVF) but not left visual field (LVF) presentation. For the 1-BC condition, ERP amplitudes were comparable in the two visual fields, both being significantly different from the 0-WC condition. The 2-BC condition showed the same pattern as the 1-BC condition. These results suggest that the categorical perception of color in RVF is due to linguistic suppression on within-category color discrimination but not between-category color enhancement, and that the effect is independent of early perceptual processes. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  12. Suppressive and enhancing effects in early visual cortex during illusory shape perception: A comment on Kok and de Lange (2014).

    PubMed

    Moors, Pieter

    2015-01-01

    In a recent functional magnetic resonance imaging study, Kok and de Lange (2014) observed that BOLD activity for a Kanizsa illusory shape stimulus, in which pacmen-like inducers elicit an illusory shape percept, was either enhanced or suppressed relative to a nonillusory control configuration, depending on whether the spatial profile of BOLD activity in early visual cortex was related to the illusory shape or the inducers, respectively. The authors argued that these findings fit well with the predictive coding framework, because top-down predictions related to the illusory shape are not met with bottom-up sensory input and hence the feedforward error signal is enhanced. Conversely, for the inducing elements, there is a match between top-down predictions and input, leading to a decrease in error. Rather than invoking predictive coding as the explanatory framework, however, one could attribute the suppressive effect related to the inducers to neural adaptation to perceptually stable input arising from the trial sequence used in the experiment.

  13. Verbal Labels Modulate Perceptual Object Processing in 1-Year-Old Children

    ERIC Educational Resources Information Center

    Gliga, Teodora; Volein, Agnes; Csibra, Gergely

    2010-01-01

    Whether verbal labels help infants visually process and categorize objects is a contentious issue. Using electroencephalography, we investigated whether possessing familiar or novel labels for objects directly enhances 1-year-old children's neural processes underlying the perception of those objects. We found enhanced gamma-band (20-60 Hz)…

  14. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were localized to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
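
    The SSVEP technique works because each flickering stimulus "tags" the EEG with a response at its flicker frequency, whose amplitude indexes processing of that stimulus. The following is a generic sketch of how such an amplitude is read off the EEG spectrum (not the authors' actual analysis pipeline; the recording here is synthetic):

```python
import numpy as np

def ssvep_amplitude(eeg, fs, freq):
    """Single-sided FFT amplitude at the tagging (flicker) frequency."""
    n = len(eeg)
    spectrum = 2.0 * np.abs(np.fft.rfft(eeg)) / n  # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Synthetic 2-s "recording" at 250 Hz: a 12 Hz response of amplitude 1.5
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
eeg = 1.5 * np.sin(2 * np.pi * 12.0 * t)
amp = ssvep_amplitude(eeg, fs, 12.0)
```

    Because the 2-s window contains a whole number of 12 Hz cycles, the tagging frequency falls on an exact FFT bin and the recovered amplitude matches the simulated 1.5; comparing such amplitudes for target- versus nontarget-tagged frequencies is what reveals the attentional facilitation described above.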

  15. Neural substrates of perceptual integration during bistable object perception

    PubMed Central

    Flevaris, Anastasia V.; Martínez, Antigona; Hillyard, Steven A.

    2013-01-01

    The way we perceive an object depends both on feedforward, bottom-up processing of its physical stimulus properties and on top-down factors such as attention, context, expectation, and task relevance. Here we compared neural activity elicited by varying perceptions of the same physical image—a bistable moving image in which perception spontaneously alternates between dissociated fragments and a single, unified object. A time-frequency analysis of EEG changes associated with the perceptual switch from object to fragment and vice versa revealed a greater decrease in alpha (8–12 Hz) accompanying the switch to object percept than to fragment percept. Recordings of event-related potentials elicited by irrelevant probes superimposed on the moving image revealed an enhanced positivity between 184 and 212 ms when the probes were contained within the boundaries of the perceived unitary object. The topography of the positivity (P2) in this latency range elicited by probes during object perception was distinct from the topography elicited by probes during fragment perception, suggesting that the neural processing of probes differed as a function of perceptual state. Two source localization algorithms estimated the neural generator of this object-related difference to lie in the lateral occipital cortex, a region long associated with object perception. These data suggest that perceived objects attract attention, incorporate visual elements occurring within their boundaries into unified object representations, and enhance the visual processing of elements occurring within their boundaries. Importantly, the perceived object in this case emerged as a function of the fluctuating perceptual state of the viewer. PMID:24246467

  16. Do Visually Impaired People Develop Superior Smell Ability?

    PubMed

    Majchrzak, Dorota; Eberhard, Julia; Kalaus, Barbara; Wagner, Karl-Heinz

    2017-10-01

    It is well known that visually impaired people perform better in orientation by sound than sighted individuals, but it is not clear whether this enhanced awareness also extends to other senses. Therefore, the aim of this study was to observe whether visually impaired subjects develop superior abilities in olfactory perception to compensate for their lack of vision. We investigated the odor perception of visually impaired individuals aged 7 to 89 (n = 99; 52 women, 47 men) and compared them with subjects of a control group aged 8 to 82 years (n = 100; 45 women, 55 men) without any visual impairment. The participants were evaluated by Sniffin' Sticks odor identification and discrimination test. Identification ability was assessed for 16 common odors presented in felt-tip pens. In the odor discrimination task, subjects had to determine which of three pens in 16 triplets had a different odor. The median number of correctly identified odorant pens in both groups was the same, 13 of the offered 16. In the discrimination test, there was also no significant difference observed. Gender did not influence results. Age-related changes were observed in both groups with olfactory perception decreasing after the age of 51. We could not confirm that visually impaired people were better in smell identification and discrimination ability than sighted individuals.

  17. Color vision in ADHD: part 2--does attention influence color perception?

    PubMed

    Kim, Soyeon; Al-Haj, Mohamed; Fuller, Stuart; Chen, Samantha; Jain, Umesh; Carrasco, Marisa; Tannock, Rosemary

    2014-10-24

    We investigated the impact of exogenous covert attention on chromatic (blue and red) and achromatic visual perception in adults with and without Attention Deficit Hyperactivity Disorder (ADHD). Exogenous covert attention, which is a transient, automatic, stimulus-driven form of attention, is a key mechanism for selecting relevant information in visual arrays. Thirty adults diagnosed with ADHD and 30 healthy adults, matched on age and gender, performed a psychophysical task designed to measure the effects of exogenous covert attention on perceived color saturation (blue, red) and contrast sensitivity. The effects of exogenous covert attention on perceived blue and red saturation levels and contrast sensitivity were similar in both groups, with no differences between males and females. Specifically, exogenous covert attention enhanced the perception of blue saturation and contrast sensitivity, but it had no effect on the perception of red saturation. The findings suggest that exogenous covert attention is intact in adults with ADHD and does not account for the observed impairments in the perception of chromatic (blue and red) saturation.

  18. Selective spatial enhancement: Attentional spotlight size impacts spatial but not temporal perception.

    PubMed

    Goodhew, Stephanie C; Shen, Elizabeth; Edwards, Mark

    2016-08-01

    An important but often neglected aspect of attention is how changes in the attentional spotlight size impact perception. The zoom-lens model predicts that a small ("focal") attentional spotlight enhances all aspects of perception relative to a larger ("diffuse") spotlight. However, based on the physiological properties of the two major classes of visual cells (magnocellular and parvocellular neurons) we predicted trade-offs in spatial and temporal acuity as a function of spotlight size. Contrary to both of these accounts, however, across two experiments we found that attentional spotlight size affected spatial acuity, such that spatial acuity was enhanced for a focal relative to a diffuse spotlight, whereas the same modulations in spotlight size had no impact on temporal acuity. This likely reflects the function of attention: to induce the high spatial resolution of the fovea in the periphery, where spatial resolution is poor but temporal resolution is good. It is adaptive, therefore, for the attentional spotlight to enhance spatial acuity, whereas enhancing temporal acuity does not confer the same benefit.

  19. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
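
    Of the seven fusion algorithms the study compares, pixel-wise averaging is the simplest baseline; a sketch of that case is below (image values are assumed co-registered and normalized to [0, 1]; the arrays are illustrative, not the study's Landolt C stimuli):

```python
import numpy as np

def average_fusion(images):
    """Pixel-wise averaging fusion: stack the co-registered single-band
    images and take the mean at each pixel. The Laplacian-pyramid and
    PCA variants mentioned in the study weight bands adaptively instead."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.mean(axis=0)

# Two toy 2x2 "bands" standing in for visible and thermal imagery
visible = np.array([[0.2, 0.8], [0.4, 0.6]])
thermal = np.array([[0.6, 0.2], [0.8, 0.0]])
fused = average_fusion([visible, thermal])
```

    The study's finding is worth noting against this sketch: a fused output like `fused` is not guaranteed to support better human performance than either input band alone, and efficiency varied more with the sensor combination than with the fusion algorithm.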

  20. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    PubMed

    Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener.

  1. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults

    PubMed Central

    Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener. PMID:27031343

  2. Oblique Orientation Discrimination Thresholds Are Superior in Those with a High Level of Autistic Traits

    ERIC Educational Resources Information Center

    Dickinson, Abigail; Jones, Myles; Milne, Elizabeth

    2014-01-01

    Enhanced low-level perception, although present in individuals with autism, is not seen in individuals with high, but non-clinical, levels of autistic traits (Brock et al. in "Percept Lond" 40(6):739, doi:10.1068/p6953, 2011). This is surprising, as many of the higher-level visual differences found in autism have been shown to correlate…

  3. An approach to integrate the human vision psychology and perception knowledge into image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Huang, Xifeng; Ping, Jiang

    2009-07-01

    Image enhancement is an important image preprocessing technique, especially when an image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore valuable to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. That knowledge holds that human perception of, and response to, an intensity fluctuation δu in a visual signal is weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber–Fechner law, and Stevens's power law. This paper integrates these three laws into a popular image enhancement algorithm, Adaptive Plateau Equalization (APE). Experiments were conducted on high-bit-depth star images captured in night scenes and on infrared images, covering both static images and video streams. For the jitter problem in video streams, the algorithm corrects the current frame's plateau value using the difference between the current and previous frames' plateau values. To limit the impact of random noise, the pixel-value mapping depends not only on the current pixel but also on the pixels in a window (usually 3×3) surrounding it. The results of the improved algorithm are evaluated by entropy analysis and visual perception analysis. 
The experimental results showed that the improved APE algorithms increased image quality: the target and the surrounding assistant targets could be identified easily, and noise was not greatly amplified. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of the image and the video stream, while for high-quality images they do not degrade image quality.
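    The plateau-equalization core described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the plateau value, the temporal smoothing rule, and all parameters are assumptions for demonstration (the abstract does not specify them), and the local 3×3 windowed mapping is omitted for brevity.

    ```python
    import numpy as np

    def plateau_equalize(img, plateau, out_max=255):
        """Histogram equalization with the histogram clipped at `plateau`:
        a simplified sketch of Adaptive Plateau Equalization (APE)."""
        hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1)
        clipped = np.minimum(hist, plateau)        # cap dominant bins
        cdf = np.cumsum(clipped).astype(np.float64)
        cdf /= cdf[-1]                             # normalize mapping to [0, 1]
        return (cdf[img] * out_max).astype(np.uint8)

    def smooth_plateau(prev_plateau, cur_plateau, alpha=0.5):
        """Correct the current frame's plateau using the difference from the
        previous frame's plateau, reducing jitter between video frames."""
        return cur_plateau + alpha * (prev_plateau - cur_plateau)

    frame = np.random.randint(0, 4096, size=(64, 64))  # synthetic 12-bit image
    enhanced = plateau_equalize(frame, plateau=50)
    ```

    Clipping the histogram before computing the cumulative mapping is what distinguishes plateau equalization from plain histogram equalization: dominant background bins no longer monopolize the output range, which matters for star and infrared imagery with large uniform backgrounds.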

  4. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in the distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field rather than response speed. © The Author(s) 2016.

  5. Local visual perception bias in children with high-functioning autism spectrum disorders; do we have the whole picture?

    PubMed

    Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn

    2016-01-01

    While local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions but superior ability in disembedding figures, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test and an emotion recognition test were investigated in 25 children aged 8-12 years with high-functioning autism/Asperger syndrome, and in comparison with 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual functioning hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.

  6. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  7. Expanding the Caring Lens: Nursing and Medical Students Reflecting on Images of Older People.

    PubMed

    Brand, Gabrielle; Miller, Karen; Saunders, Rosemary; Dugmore, Helen; Etherton-Beer, Christopher

    2016-01-01

    In changing higher education environments, health professions educators have been increasingly challenged to prepare future health professionals to care for aging populations. This article reports on an exploratory, mixed-methods research study that used an innovative photo-elicitation technique and interprofessional small-group work in the classroom to enhance the reflective learning experience of medical and nursing students. Data were collected from pre- and post-questionnaires and focus groups to explore shifts in perceptions of older persons following the reflective learning session. The qualitative data revealed how using visual images of older persons provides a valuable learning space for reflection. Students found meaning in their own learning by creating shared storylines that challenged their perceptions of older people and of themselves as future health professionals. These data support the use of visual methodologies to enhance engagement and reflection, and to challenge students to explore and deepen their understanding of gerontology.

  8. Ventral aspect of the visual form pathway is not critical for the perception of biological motion

    PubMed Central

    Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene

    2015-01-01

    Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504

  9. Role of multisensory stimuli in vigilance enhancement: a single-trial event-related potential study.

    PubMed

    Abbasi, Nida Itrat; Bodala, Indu Prasad; Bezerianos, Anastasios; Yu Sun; Al-Nashash, Hasan; Thakor, Nitish V

    2017-07-01

    Development of interventions to prevent vigilance decrement has important applications in sensitive areas like transportation and defence. The objective of this work is to use multisensory (visual and haptic) stimuli for cognitive enhancement during mundane tasks. Two different epoch intervals, representing sensory perception and motor response, were analysed using minimum variance distortionless response (MVDR)-based single-trial ERP estimation to understand the dependency of performance on both factors. Bereitschaftspotential (BP) latency L3 (r=0.6 in phase 1 (visual) and r=0.71 in phase 2 (visual and haptic)) was significantly correlated with reaction time, whereas sensory ERP latency L2 was not (r=0.1 in both phases). This implies that low performance in monotonous tasks depends predominantly on the prolonged neural interaction with the muscles to initiate movement. Further, a negative relationship was found between the ERP latencies related to sensory perception and the Bereitschaftspotential (BP) and the occurrence of epochs when multisensory cues were provided. This suggests that vigilance decrement is reduced by multisensory stimulus presentation in prolonged monotonous tasks.

  10. Children with Autism Detect Targets at Very Rapid Presentation Rates with Similar Accuracy as Adults

    ERIC Educational Resources Information Center

    Hagmann, Carl Erick; Wyble, Bradley; Shea, Nicole; LeBlanc, Megan; Kates, Wendy R.; Russo, Natalie

    2016-01-01

    Enhanced perception may allow for visual search superiority by individuals with Autism Spectrum Disorder (ASD), but does it occur over time? We tested high-functioning children with ASD, typically developing (TD) children, and TD adults in two tasks at three presentation rates (50, 83.3, and 116.7 ms/item) using rapid serial visual presentation.…

  11. Cholinergic enhancement of visual attention and neural oscillations in the human brain.

    PubMed

    Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon

    2012-03-06

    Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Enhanced visual awareness for morality and pajamas? Perception vs. memory in 'top-down' effects.

    PubMed

    Firestone, Chaz; Scholl, Brian J

    2015-03-01

    A raft of prominent findings has revived the notion that higher-level cognitive factors such as desire, meaning, and moral relevance can directly affect what we see. For example, under conditions of brief presentation, morally relevant words reportedly "pop out" and are easier to identify than morally irrelevant words. Though such results purport to show that perception itself is sensitive to such factors, much of this research instead demonstrates effects on visual recognition--which necessarily involves not only visual processing per se, but also memory retrieval. Here we report three experiments which suggest that many alleged top-down effects of this sort are actually effects on 'back-end' memory rather than 'front-end' perception. In particular, the same methods used to demonstrate popout effects for supposedly privileged stimuli (such as morality-related words, e.g. "punishment" and "victim") also yield popout effects for unmotivated, superficial categories (such as fashion-related words, e.g. "pajamas" and "stiletto"). We conclude that such effects reduce to well-known memory processes (in this case, semantic priming) that do not involve morality, and have no implications for debates about whether higher-level factors influence perception. These case studies illustrate how it is critical to distinguish perception from memory in alleged 'top-down' effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations.

    PubMed

    Butler, Andrew J; James, Thomas W; James, Karin Harman

    2011-11-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.

  14. Making the invisible visible: verbal but not visual cues enhance visual detection.

    PubMed

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision about briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but it eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of a heard word can influence even the most elementary visual processing, and they inform our understanding of how language affects perception.
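    The sensitivity measure d' reported in this study comes from signal detection theory; it separates detectability from response bias. A minimal sketch of how d' is computed from a detection task's outcome counts (the 1/(2N) correction for extreme rates is one common convention assumed here, not taken from the paper):

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a 1/(2N)
        correction so rates of exactly 0 or 1 stay finite."""
        z = NormalDist().inv_cdf
        n_sig, n_noise = hits + misses, false_alarms + correct_rejections
        hr = min(max(hits / n_sig, 1 / (2 * n_sig)), 1 - 1 / (2 * n_sig))
        fa = min(max(false_alarms / n_noise, 1 / (2 * n_noise)),
                 1 - 1 / (2 * n_noise))
        return z(hr) - z(fa)

    # 40 hits / 10 misses vs. 15 false alarms / 35 correct rejections
    print(round(d_prime(40, 10, 15, 35), 2))  # → 1.37
    ```

    A cue that raises d' (as the auditory label did here) improves discriminability itself, whereas a cue that merely shifts the response criterion would change hit and false-alarm rates together and leave d' unchanged.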

  15. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    PubMed

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  17. Communication of uncertainty regarding individualized cancer risk estimates: effects and influential factors.

    PubMed

    Han, Paul K J; Klein, William M P; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N

    2011-01-01

    To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates and to identify factors that influence these effects, two Web-based experiments were conducted in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as "ambiguity") of the risk estimates and was communicated using different representations of confidence intervals. Experiment 1 (n = 240) tested the effects of ambiguity (confidence interval v. point estimate) and representational format (textual v. visual) on cancer risk perceptions and worry. Potential effect modifiers, including personality type (optimism), numeracy, and the information's perceived credibility, were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n = 135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of "ambiguity aversion." This effect was moderated by representational format and dispositional optimism; textual (v. visual) format and low (v. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry (consistent with ambiguity aversion) and diminishing the influence of comparative risk information on risk perceptions. 
These responses are influenced by representational format and personality type, and the influence of format appears to be modifiable and content dependent.

  18. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    PubMed

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
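    The "temporal window" estimate described in this abstract can be illustrated with a toy computation. The data below are synthetic and the >50% criterion is an assumption for illustration only; the abstract does not state how the authors derived their window estimates.

    ```python
    import numpy as np

    # Synthetic illusion-report rates across audio-visual onset asynchronies (ms)
    soas = np.arange(-300, 301, 50)
    p_illusion = np.array([.05, .10, .20, .45, .70, .90, .95,
                           .90, .72, .48, .22, .12, .06])

    def temporal_window(soas, p, criterion=0.5):
        """Width of the SOA range where the illusion is reported more often
        than `criterion`: a simple audiovisual binding-window estimate."""
        inside = soas[p > criterion]
        return int(inside.max() - inside.min()) if inside.size else 0

    print(temporal_window(soas, p_illusion))  # → 200
    ```

    On this logic, a narrower window (as reported for musicians) means illusory fusion of sound and flash occurs only when the two events arrive closely in time, i.e., more precise audiovisual binding.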

  19. Neuronal Response Gain Enhancement prior to Microsaccades.

    PubMed

    Chen, Chih-Yang; Ignashchenkova, Alla; Thier, Peter; Hafed, Ziad M

    2015-08-17

    Neuronal response gain enhancement is a classic signature of the allocation of covert visual attention without eye movements. However, microsaccades continuously occur during gaze fixation. Because these tiny eye movements are preceded by motor preparatory signals well before they are triggered, it may be the case that a corollary of such signals may cause enhancement, even without attentional cueing. In six different macaque monkeys and two different brain areas previously implicated in covert visual attention (superior colliculus and frontal eye fields), we show neuronal response gain enhancement for peripheral stimuli appearing immediately before microsaccades. This enhancement occurs both during simple fixation with behaviorally irrelevant peripheral stimuli and when the stimuli are relevant for the subsequent allocation of covert visual attention. Moreover, this enhancement occurs in both purely visual neurons and visual-motor neurons, and it is replaced by suppression for stimuli appearing immediately after microsaccades. Our results suggest that there may be an obligatory link between microsaccade occurrence and peripheral selective processing, even though microsaccades can be orders of magnitude smaller than the eccentricities of peripheral stimuli. Because microsaccades occur in a repetitive manner during fixation, and because these eye movements reset neurophysiological rhythms every time they occur, our results highlight a possible mechanism through which oculomotor events may aid periodic sampling of the visual environment for the benefit of perception, even when gaze is prevented from overtly shifting. One functional consequence of such periodic sampling could be the magnification of rhythmic fluctuations of peripheral covert visual attention. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Aminergic neuromodulation of associative visual learning in harnessed honey bees.

    PubMed

    Mancini, Nino; Giurfa, Martin; Sandoz, Jean-Christophe; Avarguès-Weber, Aurore

    2018-05-21

    The honey bee Apis mellifera is a major insect model for studying visual cognition. Free-flying honey bees learn to associate different visual cues with a sucrose reward and may deploy sophisticated cognitive strategies to this end. Yet, the neural bases of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation, but training them to respond appetitively to visual stimuli paired with sucrose reward is difficult. Here we succeeded in coupling visual conditioning in harnessed bees with pharmacological analyses of the roles of octopamine (OA), dopamine (DA) and serotonin (5-HT) in visual learning. We also studied whether and how these biogenic amines modulate sucrose responsiveness and phototaxis behaviour, since intact reward processing and visual perception are essential prerequisites for appetitive visual learning. Our results suggest that both octopaminergic and dopaminergic signaling mediate either appetitive sucrose signaling or the association between color and sucrose reward in the bee brain. Enhancing and inhibiting serotonergic signaling both compromised learning performance, probably via an impairment of visual perception. We thus provide a first analysis of the role of aminergic signaling in visual learning and retention in the honey bee and discuss further research directions necessary for understanding the neural bases of visual cognition in this insect. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on visual detection thresholds in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds improved visual detection thresholds only in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped by flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most effective at improving perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Visual Color Comparisons in Forensic Science.

    PubMed

    Thornton, J I

    1997-06-01

    Color is used extensively in forensic science for the characterization and comparison of physical evidence, and should thus be well understood. Fundamental elements of color perception and color comparison systems are first reviewed. The second portion of this article discusses instances in which defects in color perception may occur, and the means by which color perception and color discrimination may be expressed and enhanced. Applications and limitations of color comparisons in forensic science, including soil, paint, and fiber comparisons and color tests, are reviewed. Copyright © 1997 Central Police University.

  3. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner

    PubMed Central

    Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.

    2013-01-01

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388

  4. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual face perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention may play a modulatory role in audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved.

  5. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or presented with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. The effect of contextual sound cues on visual fidelity perception.

    PubMed

    Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam

    2014-01-01

    Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.

  7. Perceptual and cognitive biases in individuals with body dysmorphic disorder symptoms.

    PubMed

    Clerkin, Elise M; Teachman, Bethany A

    2008-01-01

    Given the extreme focus on perceived physical defects in body dysmorphic disorder (BDD), we expected that perceptual and cognitive biases related to physical appearance would be associated with BDD symptomology. To examine these hypotheses, participants (N = 70) high and low in BDD symptoms completed tasks assessing visual perception and cognition. As expected, there were significant group differences in self-, but not other-, relevant cognitive biases. Perceptual bias results were mixed, with some evidence indicating that individuals high (versus low) in BDD symptoms literally see themselves in a less positive light. Further, individuals high in BDD symptoms failed to demonstrate a normative self-enhancement bias. Overall, this research points to the importance of assessing both cognitive and perceptual biases associated with BDD symptoms, and suggests that visual perception may be influenced by non-visual factors.

  9. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    PubMed

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The aim was to display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor, and the joystick enabled a fly-through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations, and should be further researched to determine its utility in clinical practice.

  10. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  11. Olfactory-visual integration facilitates perception of subthreshold negative emotion.

    PubMed

    Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen

    2015-10-01

    A fast-growing literature on multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and the amygdala. Dynamic causal modeling (DCM) analysis of fMRI time series further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account of olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
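
    The distributional-learning idea described above can be illustrated with a toy simulation: a two-component Gaussian mixture is fit by a minimal EM loop to unlabeled bimodal cue data, so the two "phonological categories" emerge purely from the cue statistics. The cue dimensions, category means, and sample sizes below are illustrative assumptions, not parameters from the authors' simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical phonological categories, each emitting a 2-D cue vector
# (auditory cue, visual cue); means and spreads are made up for illustration.
n = 500
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(n, 2)),
               rng.normal([4.0, 3.0], 1.0, size=(n, 2))])

# Minimal EM for a 2-component spherical GMM; no category labels are used,
# so the categories are acquired from the cue distributions alone.
mu = np.stack([X.min(axis=0), X.max(axis=0)])   # spread-out initial means
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each data point
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(axis=-1)
    logp = -0.5 * d2 / var - np.log(var) + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means, and variances
    nk = resp.sum(axis=0)
    pi = nk / len(X)
    mu = (resp.T @ X) / nk[:, None]
    var = (resp * d2).sum(axis=0) / (2 * nk)

print(np.round(mu, 1))   # learned means ≈ (0, 0) and (4, 3), up to component order
```

    With well-separated cue distributions the learned means recover the two category centres, mirroring the claim that simple statistical learning suffices to acquire multimodal category representations.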

  14. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  15. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our results demonstrate the validity of the dot-probe task for visual attention studies in monkeys and suggest a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear whether nursing experience influences their perception and recognition of infantile stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Parietal disruption alters audiovisual binding in the sound-induced flash illusion.

    PubMed

    Kamke, Marc R; Vieth, Harrison E; Cottrell, David; Mattingley, Jason B

    2012-09-01

    Selective attention and multisensory integration are fundamental to perception, but little is known about whether, or under what circumstances, these processes interact to shape conscious awareness. Here, we used transcranial magnetic stimulation (TMS) to investigate the causal role of attention-related brain networks in multisensory integration between visual and auditory stimuli in the sound-induced flash illusion. The flash illusion is a widely studied multisensory phenomenon in which a single flash of light is falsely perceived as multiple flashes in the presence of irrelevant sounds. We investigated the hypothesis that extrastriate regions involved in selective attention, specifically within the right parietal cortex, exert an influence on the multisensory integrative processes that cause the flash illusion. We found that disruption of the right angular gyrus, but not of the adjacent supramarginal gyrus or of a sensory control site, enhanced participants' veridical perception of the multisensory events, thereby reducing their susceptibility to the illusion. Our findings suggest that the same parietal networks that normally act to enhance perception of attended events also play a role in the binding of auditory and visual stimuli in the sound-induced flash illusion. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Connectopathy in Autism Spectrum Disorders: A Review of Evidence from Visual Evoked Potentials and Diffusion Magnetic Resonance Imaging

    PubMed Central

    Yamasaki, Takao; Maekawa, Toshihiko; Fujita, Takako; Tobimatsu, Shozo

    2017-01-01

    Individuals with autism spectrum disorder (ASD) show superior performance in processing fine details; however, they often exhibit impairments of gestalt face, global motion perception, and visual attention as well as core social deficits. Increasing evidence has suggested that social deficits in ASD arise from abnormal functional and structural connectivities between and within distributed cortical networks that are recruited during social information processing. Because the human visual system is characterized by a set of parallel, hierarchical, multistage network systems, we hypothesized that the altered connectivity of visual networks contributes to social cognition impairment in ASD. In the present review, we focused on studies of altered connectivity of visual and attention networks in ASD using visual evoked potentials (VEPs), event-related potentials (ERPs), and diffusion tensor imaging (DTI). A series of VEP, ERP, and DTI studies conducted in our laboratory have demonstrated complex alterations (impairment and enhancement) of visual and attention networks in ASD. Recent data have suggested that the atypical visual perception observed in ASD is caused by altered connectivity within parallel visual pathways and attention networks, thereby contributing to the impaired social communication observed in ASD. Therefore, we conclude that the underlying pathophysiological mechanism of ASD constitutes a “connectopathy.” PMID:29170625

  18. Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    PubMed Central

    Lupyan, Gary; Spivey, Michael J.

    2010-01-01

    Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
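
    The sensitivity index d′ reported above comes from standard equal-variance signal detection theory: d′ = z(hit rate) − z(false-alarm rate). A minimal sketch follows; the trial counts are made-up illustration values, not data from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Uses the log-linear correction (add 0.5 to each cell) so that
    perfect or zero rates do not produce infinite z-scores.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# A cued condition with more hits (same false alarms) yields higher d',
# i.e. greater perceptual sensitivity rather than a shifted criterion.
print(round(d_prime(40, 10, 10, 40), 2))   # cued condition → 1.64
print(round(d_prime(30, 20, 10, 40), 2))   # uncued condition → 1.07
```

    Because d′ separates sensitivity from response bias, it is the appropriate measure for the claim that verbal cues change what is actually seen rather than how willing observers are to report a target.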

  19. Audiovisual integration in children listening to spectrally degraded speech.

    PubMed

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
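
    An adaptive track targeting roughly 79% correct matches the classic 3-down-1-up transformed staircase (Levitt, 1971), which converges on 79.4% correct; the study's exact adaptive rule is not specified here, so treating it as 3-down-1-up is an assumption, and the simulated listener, step size, and starting level below are likewise made up.

```python
import math
import random

def three_down_one_up(trial, start=16, step=1, n_trials=300, lo=1, hi=32):
    """3-down-1-up staircase: the tracked level converges on ~79.4% correct.

    `trial(level)` runs one trial and returns True if the response was correct.
    The threshold estimate is the mean level at the last few reversals.
    """
    level, run, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        if trial(level):
            run += 1
            if run < 3:
                continue
            run, direction = 0, -1        # three correct in a row: make it harder
        else:
            run, direction = 0, +1        # one error: make it easier
        if last_dir and direction != last_dir:
            reversals.append(level)       # track direction reversals
        last_dir = direction
        level = min(hi, max(lo, level + direction * step))
    tail = reversals[-8:]
    return sum(tail) / len(tail)

# Simulated listener: probability correct rises logistically with the
# number of vocoder bands (midpoint at 8 bands; purely illustrative).
rng = random.Random(1)
listener = lambda bands: rng.random() < 1.0 / (1.0 + math.exp(-(bands - 8)))

threshold = three_down_one_up(listener)
print(threshold)   # ≈ 9-10 bands (this listener's 79.4% point is ≈ 9.3)
```

    A lower band threshold in the audiovisual condition than in the auditory-only condition would quantify the audiovisual gain measured in the study.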

  20. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.
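
    The combination of enhancement and pyramid coding can be sketched with a toy Laplacian pyramid: the band-pass residuals can be quantized (compression) or amplified (enhancement) before reconstruction. The box-filter pyramid, image size, and parameter values below are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def downsample(img):
    # crude 2x2 box filter + decimation (stand-in for a Gaussian pyramid step)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbour 2x expansion
    return np.kron(img, np.ones((2, 2)))

def laplacian_pyramid(img, levels=3):
    pyr = []
    for _ in range(levels):
        low = downsample(img)
        pyr.append(img - upsample(low))   # band-pass residual at this scale
        img = low
    pyr.append(img)                        # low-pass top level
    return pyr

def reconstruct(pyr, quant_step=0.0, boost=1.0):
    """Rebuild the image, optionally quantizing (compression) and
    amplifying (enhancement) the band-pass residuals."""
    img = pyr[-1]
    for resid in reversed(pyr[:-1]):
        if quant_step:
            resid = np.round(resid / quant_step) * quant_step
        img = upsample(img) + boost * resid
    return img

img = np.random.default_rng(0).random((32, 32))
pyr = laplacian_pyramid(img)
print(np.allclose(reconstruct(pyr), img))   # True: lossless without quantization
```

    Quantizing the residuals reduces the bit rate at a bounded reconstruction error, while a boost factor above 1 sharpens detail in the same coefficient domain, which is the sense in which the two schemes combine naturally.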

  1. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  2. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids – stimuli composed of two orthogonally drifting gratings, presented separately to each eye – in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238

  3. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2012-05-30

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.

  4. Visual Search Targeting Either Local or Global Perceptual Processes Differs as a Function of Autistic-Like Traits in the Typically Developing Population

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2013-01-01

    Relative to low scorers, high scorers on the Autism-Spectrum Quotient (AQ) show enhanced performance on the Embedded Figures Test and the Radial Frequency search task (RFST), which has been attributed to both enhanced local processing and differences in combining global percepts. We investigate the role of local and global processing further using…

  5. Occupational therapy intervention to inspire self-efficacy in a patient with spinal ataxia and visual disturbance

    PubMed Central

    Tohyama, Satsuki; Usuki, Fusako

    2015-01-01

    We report a case of a patient with severe ataxia and visual disturbance due to vitamin E deficiency, whose self-efficacy was inspired by intervention with an appropriate occupational therapy activity. Before the handloom intervention, her severe neurological deficits decreased her activities of daily living (ADL) ability, which made her feel pessimistic and depressed. The use of a handloom, however, inspired her sense of accomplishment because she could perform the weft movement by using her residual physical function, thereby relieving her pessimistic attitude. This perception of capability motivated her to participate in further rehabilitation. Finally, her eager practice enhanced her ADL ability and quality of life (QOL). The result suggests that it is important to provide an appropriate occupational therapy activity that can inspire self-efficacy in patients with chronic refractory neurological disorders because the perception of capability can enhance the motivation to improve performance in general activities, ADL ability and QOL. PMID:25666249

  6. Occupational therapy intervention to inspire self-efficacy in a patient with spinal ataxia and visual disturbance.

    PubMed

    Tohyama, Satsuki; Usuki, Fusako

    2015-02-09

    We report a case of a patient with severe ataxia and visual disturbance due to vitamin E deficiency, whose self-efficacy was inspired by intervention with an appropriate occupational therapy activity. Before the handloom intervention, her severe neurological deficits decreased her activities of daily living (ADL) ability, which made her feel pessimistic and depressed. The use of a handloom, however, inspired her sense of accomplishment because she could perform the weft movement by using her residual physical function, thereby relieving her pessimistic attitude. This perception of capability motivated her to participate in further rehabilitation. Finally, her eager practice enhanced her ADL ability and quality of life (QOL). The result suggests that it is important to provide an appropriate occupational therapy activity that can inspire self-efficacy in patients with chronic refractory neurological disorders because the perception of capability can enhance the motivation to improve performance in general activities, ADL ability and QOL.

  7. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  8. Perceptual Averaging in Individuals with Autism Spectrum Disorder.

    PubMed

    Corbett, Jennifer E; Venuti, Paola; Melcher, David

    2016-01-01

    There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.

  9. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner.

    PubMed

    Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A

    2013-06-07

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored those of positive responses across cortical areas.

  10. Multisensory and modality specific processing of visual speech in different regions of the premotor cortex

    PubMed Central

    Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko

    2014-01-01

    Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the fMRI analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. 
The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal onto articulatory speech gestures. PMID:24860526

  11. Exploring the Link between Visual Perception, Visual-Motor Integration, and Reading in Normal Developing and Impaired Children using DTVP-2.

    PubMed

    Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie

    2017-08-01

    Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies were designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1 in typically developing children. Study 2 is aimed at finding out whether these skills can serve as clinical markers in dyslexic children (DD). Study 3 determines whether visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether or not they exhibit developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. The DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. The DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration, and reading, and to differentiate the cognitive profiles of children with developmental disabilities (i.e., DD, DCD, and comorbid children).

  12. Exogenous temporal cues enhance recognition memory in an object-based manner.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2010-11-01

    Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.

  13. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  14. Edge enhancement improves disruptive camouflage by emphasising false edges and creating pictorial relief

    PubMed Central

    Egan, John; Sharman, Rebecca J.; Scott-Brown, Kenneth C.; Lovell, Paul George

    2016-01-01

    Disruptive colouration is a visual camouflage composed of false edges and boundaries. Many disruptively camouflaged animals feature enhanced edges; light patches are surrounded by a lighter outline and/or dark patches are surrounded by a darker outline. This camouflage is particularly common in amphibians, reptiles and lepidopterans. We explored the role that this pattern has in creating effective camouflage. In a visual search task utilising an ultra-large display area mimicking search tasks that might be found in nature, edge enhanced disruptive camouflage increases crypsis, even on substrates that do not provide an obvious visual match. Specifically, edge enhanced camouflage is effective on backgrounds both with and without shadows; i.e. this is not solely due to background matching of the dark edge enhancement element with the shadows. Furthermore, when the dark component of the edge enhancement is omitted the camouflage still provided better crypsis than control patterns without edge enhancement. This kind of edge enhancement improved camouflage on all background types. Lastly, we show that edge enhancement can create a perception of multiple surfaces. We conclude that edge enhancement increases the effectiveness of disruptive camouflage through mechanisms that may include the improved disruption of the object outline by implying pictorial relief. PMID:27922058

  15. Edge enhancement improves disruptive camouflage by emphasising false edges and creating pictorial relief.

    PubMed

    Egan, John; Sharman, Rebecca J; Scott-Brown, Kenneth C; Lovell, Paul George

    2016-12-06

    Disruptive colouration is a visual camouflage composed of false edges and boundaries. Many disruptively camouflaged animals feature enhanced edges; light patches are surrounded by a lighter outline and/or dark patches are surrounded by a darker outline. This camouflage is particularly common in amphibians, reptiles and lepidopterans. We explored the role that this pattern has in creating effective camouflage. In a visual search task utilising an ultra-large display area mimicking search tasks that might be found in nature, edge enhanced disruptive camouflage increases crypsis, even on substrates that do not provide an obvious visual match. Specifically, edge enhanced camouflage is effective on backgrounds both with and without shadows; i.e. this is not solely due to background matching of the dark edge enhancement element with the shadows. Furthermore, when the dark component of the edge enhancement is omitted the camouflage still provided better crypsis than control patterns without edge enhancement. This kind of edge enhancement improved camouflage on all background types. Lastly, we show that edge enhancement can create a perception of multiple surfaces. We conclude that edge enhancement increases the effectiveness of disruptive camouflage through mechanisms that may include the improved disruption of the object outline by implying pictorial relief.

  16. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  17. Designing a Culturally Appropriate Visually Enhanced Low-Text Mobile Health App Promoting Physical Activity for Latinos: A Qualitative Study.

    PubMed

    Bender, Melinda S; Martinez, Suzanna; Kennedy, Christine

    2016-07-01

    Rapid proliferation of smartphone ownership and use among Latinos offers a unique opportunity to employ innovative visually enhanced low-text (VELT) mobile health applications (mHealth apps) to promote health behavior change for Latinos at risk for lifestyle-related diseases. Using focus groups and in-depth interviews with 16 promotores and 5 health care providers recruited from California clinics, this qualitative study explored perceptions of visuals for a VELT mHealth app promoting physical activity (PA) and limiting sedentary behavior (SB) for Latinos. In this Phase 1 study, participants endorsed visuals portraying PA guidelines and recommended visuals depicting family and socially oriented PA. Overall, participants supported a VELT mHealth app as an alternative to text-based education. Findings will inform the future Phase 2 development of a culturally appropriate VELT mHealth app to promote PA for Latinos, improve health literacy, and provide an alternative to traditional clinic text-based health education materials.

  18. The potential for gaming techniques in radiology education and practice.

    PubMed

    Reiner, Bruce; Siegel, Eliot

    2008-02-01

    Traditional means of communication, education and training, and research have been dramatically transformed with the advent of computerized medicine, and no other medical specialty has been more greatly affected than radiology. Of the myriad of newer computer applications currently available, computer gaming stands out for its unique potential to enhance end-user performance and job satisfaction. Research in other disciplines has demonstrated computer gaming to offer the potential for enhanced decision making, resource management, visual acuity, memory, and motor skills. Within medical imaging, video gaming provides a novel means to enhance radiologist and technologist performance and visual perception by increasing attentional capacity, visual field of view, and visual-motor coordination. These enhancements take on heightened importance with the increasing size and complexity of three-dimensional imaging datasets. Although these operational gains are important in themselves, psychologic gains intrinsic to video gaming offer the potential to reduce stress and improve job satisfaction by creating a fun and engaging means of spirited competition. By creating customized gaming programs and rewards systems, video game applications can be customized to the skill levels and preferences of individual users, thereby creating a comprehensive means to improve individual and collective job performance.

  19. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    PubMed

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. 
However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections.

  20. Transcranial Random Noise Stimulation of Visual Cortex: Stochastic Resonance Enhances Central Mechanisms of Perception.

    PubMed

    van der Groen, Onno; Wenderoth, Nicole

    2016-05-11

    Random noise enhances the detectability of weak signals in nonlinear systems, a phenomenon known as stochastic resonance (SR). Though counterintuitive at first, SR has been demonstrated in a variety of naturally occurring processes, including human perception, where it has been shown that adding noise directly to weak visual, tactile, or auditory stimuli enhances detection performance. These results indicate that random noise can push subthreshold receptor potentials across the transfer threshold, causing action potentials in an otherwise silent afference. Despite the wealth of evidence demonstrating SR for noise added to a stimulus, relatively few studies have explored whether noise added directly to cortical networks enhances sensory detection. Here we administered transcranial random noise stimulation (tRNS; 100-640 Hz zero-mean Gaussian white noise) to the occipital region of human participants. For increasing tRNS intensities (ranging from 0 to 1.5 mA), detection accuracy for a visual stimulus changed according to an inverted-U-shaped function, typical of the SR phenomenon. When the optimal level of noise was added to visual cortex, detection performance improved significantly relative to a zero-noise condition (9.7 ± 4.6%) and to a similar extent as optimal noise added to the visual stimuli (11.2 ± 4.7%). Our results demonstrate that adding noise to cortical networks can improve human behavior and that tRNS is an appropriate tool to exploit this mechanism. Our findings suggest that neural processing at the network level exhibits nonlinear system properties that are sensitive to the stochastic resonance phenomenon and highlight the usefulness of tRNS as a tool to modulate human behavior. 
Since tRNS can be applied to all cortical areas, exploiting the SR phenomenon is not restricted to the perceptual domain, but can be used for other functions that depend on nonlinear neural dynamics (e.g., decision making, task switching, response inhibition, and many other processes). This will open new avenues for using tRNS to investigate brain function and enhance the behavior of healthy individuals or patients. Copyright © 2016 the authors 0270-6474/16/365289-10$15.00/0.
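    The inverted-U relationship described in this record can be reproduced with a toy hard-threshold detector. The sketch below illustrates only the generic SR principle, not the authors' tRNS paradigm; the signal amplitude, threshold, and noise levels are arbitrary values chosen so the signal alone stays subthreshold.

```python
import numpy as np

def detection_accuracy(signal_amp, threshold, noise_sd, n_trials=50000, seed=0):
    """Accuracy of a hard-threshold detector in a yes/no task: half the trials
    contain a weak (subthreshold) signal and half contain nothing; zero-mean
    Gaussian noise is added to every trial, and 'yes' is reported whenever the
    summed input crosses the threshold."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_sd, (2, n_trials))
    hits = np.mean(signal_amp + noise[0] > threshold)    # signal-present trials
    correct_rejections = np.mean(noise[1] <= threshold)  # signal-absent trials
    return 0.5 * (hits + correct_rejections)

# The signal alone (0.8) never crosses the threshold (1.0): without noise,
# accuracy sits at chance. Moderate noise lifts the signal across the
# threshold often enough to help; strong noise swamps the signal again.
amp, thresh = 0.8, 1.0
accuracies = [detection_accuracy(amp, thresh, sd) for sd in (0.0, 0.2, 0.5, 1.5, 4.0)]
```

    Accuracy rises from chance (0.5) with no noise, peaks at an intermediate noise level, and falls off again as noise grows, the inverted-U signature of SR.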

  1. Enhanced perception in savant syndrome: patterns, structure and creativity

    PubMed Central

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle

    2009-01-01

    According to the enhanced perceptual functioning (EPF) model, autistic perception is characterized by: enhanced low-level operations; locally oriented processing as a default setting; greater activation of perceptual areas during a range of visuospatial, language, working memory or reasoning tasks; autonomy towards higher processes; and superior involvement in intelligence. EPF has been useful in accounting for autistic relative peaks of ability in the visual and auditory modalities. However, the role played by atypical perceptual mechanisms in the emergence and character of savant abilities remains underdeveloped. We now propose that enhanced detection of patterns, including similarity within and among patterns, is one of the mechanisms responsible for operations on human codes, a type of material with which savants show particular facility. This mechanism would favour an orientation towards material possessing the highest level of internal structure, through the implicit detection of within- and between-code isomorphisms. A second mechanism, related to but exceeding the existing concept of redintegration, involves completion, or filling-in, of missing information in memorized or perceived units or structures. In the context of autistics' enhanced perception, the nature and extent of these two mechanisms, and their possible contribution to the creativity evident in savant performance, are explored. PMID:19528021

  2. Enhanced perception in savant syndrome: patterns, structure and creativity.

    PubMed

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle

    2009-05-27

    According to the enhanced perceptual functioning (EPF) model, autistic perception is characterized by: enhanced low-level operations; locally oriented processing as a default setting; greater activation of perceptual areas during a range of visuospatial, language, working memory or reasoning tasks; autonomy towards higher processes; and superior involvement in intelligence. EPF has been useful in accounting for autistic relative peaks of ability in the visual and auditory modalities. However, the role played by atypical perceptual mechanisms in the emergence and character of savant abilities remains underdeveloped. We now propose that enhanced detection of patterns, including similarity within and among patterns, is one of the mechanisms responsible for operations on human codes, a type of material with which savants show particular facility. This mechanism would favour an orientation towards material possessing the highest level of internal structure, through the implicit detection of within- and between-code isomorphisms. A second mechanism, related to but exceeding the existing concept of redintegration, involves completion, or filling-in, of missing information in memorized or perceived units or structures. In the context of autistics' enhanced perception, the nature and extent of these two mechanisms, and their possible contribution to the creativity evident in savant performance, are explored.

  3. Abnormal Size-Dependent Modulation of Motion Perception in Children with Autism Spectrum Disorder (ASD).

    PubMed

    Sysoeva, Olga V; Galuta, Ilia A; Davletshina, Maria S; Orekhova, Elena V; Stroganova, Tatiana A

    2017-01-01

    Excitation/inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception, at least two phenomena critically depend on the E/I balance in visual cortex: spatial suppression (SS) and spatial facilitation (SF), corresponding to impoverished or improved motion perception with increasing stimulus size, respectively. While SS is dominant at high contrast, SF is evident for low-contrast stimuli, due to the prevalence of inhibitory contextual modulations in the former case and excitatory ones in the latter. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in Autism Spectrum Disorder (ASD). Our study aimed to replicate those findings and to explore the putative contribution of deficient inhibitory influences to the enhanced SF index in ASD, a cornerstone of the interpretation proposed by Foss-Feig et al. (2013). SS and SF were examined in 40 boys with ASD with a broad spectrum of intellectual abilities (63 < IQ < 127) and in 44 typically developing (TD) boys, aged 6-15 years. Stimuli of small (1°) and large (12°) radius were presented under high-contrast (100%) and low-contrast (1%) conditions. The Social Responsiveness Scale and Sensory Profile Questionnaire were used to assess autism severity and sensory processing abnormalities. We found that the SS index was atypically reduced, while the SF index was abnormally enhanced, in children with ASD. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indexes were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index, but not the SS index, correlated with the severity of autism and with poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD. 
    Nonetheless, the absence of correlation between the SF and SS indexes, paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample, emphasizes the role of enhanced excitatory influences themselves in the abnormalities of low-level visual phenomena observed in ASD.
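    The record does not spell out how the SS and SF indexes are computed. A common convention in this literature (e.g. Tadin and colleagues' duration-threshold paradigm) is a log ratio of the stimulus durations needed to judge motion direction for large versus small stimuli; the sketch below uses that convention with hypothetical threshold values.

```python
import math

def spatial_index(threshold_large, threshold_small):
    """Log ratio of duration thresholds (ms) for large vs. small stimuli.
    Positive values: larger stimuli need longer exposure (suppression).
    Negative values: larger stimuli need less exposure (facilitation)."""
    return math.log10(threshold_large / threshold_small)

# High contrast: large stimuli are harder to judge -> suppression index > 0.
ss = spatial_index(threshold_large=80.0, threshold_small=40.0)

# Low contrast: large stimuli are easier to judge -> facilitation index < 0.
sf = spatial_index(threshold_large=30.0, threshold_small=60.0)
```

    Under this convention, the reduced SS and enhanced SF reported for the ASD group would correspond to a smaller positive index at high contrast and a more negative index at low contrast.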

  4. Congenital Blindness Leads to Enhanced Vibrotactile Perception

    ERIC Educational Resources Information Center

    Wan, Catherine Y.; Wood, Amanda G.; Reutens, David C.; Wilson, Sarah J.

    2010-01-01

    Previous studies have shown that in comparison with the sighted, blind individuals display superior non-visual perceptual abilities and differ in brain organisation. In this study, we investigated the performance of blind and sighted participants on a vibrotactile discrimination task. Thirty-three blind participants were classified into one of…

  5. Effects of color combination and ambient illumination on visual perception time with TFT-LCD.

    PubMed

    Lin, Chin-Chiuan; Huang, Kuo-Chen

    2009-10-01

    An empirical study was carried out to examine the effects of color combination and ambient illumination on visual perception time using a TFT-LCD. The effect of color combination was broken down into two subfactors, luminance contrast ratio and chromaticity contrast. Analysis indicated that luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception time. Visual perception time was better at a high luminance contrast ratio than at a low one. Visual perception time under normal ambient illumination was better than at other ambient illumination levels, although the stimulus color had a confounding effect on visual perception time. In general, visual perception time was better for the primary colors than for the middle-point colors. Based on the results, a normal ambient illumination level and a high luminance contrast ratio seem to be the optimal choice for the design of workplaces with TFT-LCD video display terminals.
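    The record does not define its luminance contrast ratio measure. One widely used formulation, the WCAG relative-luminance contrast ratio for sRGB colors, can serve as a stand-in illustration of how such a ratio is computed from a foreground/background color pair:

```python
def srgb_to_linear(c):
    """Linearize one sRGB channel value in [0, 1] (IEC 61966-2-1 / WCAG)."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance of an (R, G, B) triple with channels in [0, 1]."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG luminance contrast ratio, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# White on black yields the maximum possible ratio, 21:1.
ratio = contrast_ratio((1.0, 1.0, 1.0), (0.0, 0.0, 0.0))
```

    Whether the study used this exact formula is not stated in the abstract; the snippet only illustrates the general notion of a luminance contrast ratio.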

  6. The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

    Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced uncertainty about the timing of the visual display and improved perceptual responses (an informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of sound informativity in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  7. Plasticity Beyond V1: Reinforcement of Motion Perception upon Binocular Central Retinal Lesions in Adulthood.

    PubMed

    Burnat, Kalina; Hu, Tjing-Tjing; Kossut, Małgorzata; Eysel, Ulf T; Arckens, Lutgarde

    2017-09-13

    Induction of a central retinal lesion in both eyes of adult mammals is a model for macular degeneration and leads to retinotopic map reorganization in the primary visual cortex (V1). Here we characterized the spatiotemporal dynamics of molecular activity levels in the central and peripheral representations of five higher-order visual areas (V2/18, V3/19, V4/21a, V5/PMLS, and area 7) as well as V1/17, in adult cats with central 10° retinal lesions (both sexes), by means of real-time PCR for the neuronal activity reporter gene zif268. The lesions elicited a similar, permanent reduction in activity in the center of the lesion projection zone (LPZ) of areas V1/17, V2/18, V3/19, and V4/21a, but not in the motion-driven V5/PMLS, which instead displayed an increase in molecular activity at 3 months postlesion, independent of visual field coordinates. Area 7 displayed decreased activity in its LPZ only in the first weeks postlesion, and increased activity in its periphery from 1 month onward. We therefore examined the impact of central vision loss on motion perception, using random dot kinematograms to test the capacity for form-from-motion detection based on direction and velocity cues. We revealed that the central retinal lesions either do not impair motion detection or even result in better performance, specifically when motion discrimination was based on velocity discrimination. In conclusion, we propose that central retinal damage leads to enhanced peripheral vision by sensitizing the visual system for motion processing relying on feedback from V5/PMLS and area 7. SIGNIFICANCE STATEMENT Central retinal lesions, a model for macular degeneration, result in functional reorganization of the primary visual cortex. Examining the level of cortical reactivation with the molecular activity marker zif268 revealed reorganization in visual areas outside V1. 
Retinotopic lesion projection zones typically display an initial depression in zif268 expression, followed by partial recovery with postlesion time. Only the motion-sensitive area V5/PMLS shows no decrease, and even a significant activity increase at 3 months post-retinal lesion. Behavioral tests of motion perception found no impairment and even better sensitivity to higher random dot stimulus velocities. We demonstrate that the loss of central vision induces functional mobilization of motion-sensitive visual cortex, resulting in enhanced perception of moving stimuli. Copyright © 2017 the authors 0270-6474/17/378989-11$15.00/0.

  8. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    PubMed Central

    2011-01-01

    Introduction Phantom limb sensation and phantom limb pain are very common after amputation. In recent years there has been accumulating data implicating 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion This case suggests that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees. PMID:21272334

  9. The malleability of emotional perception: Short-term plasticity in retinotopic neurons accompanies the formation of perceptual biases to threat.

    PubMed

    Thigpen, Nina N; Bartsch, Felix; Keil, Andreas

    2017-04-01

    Emotional experience changes visual perception, leading to the prioritization of sensory information associated with threats and opportunities. These emotional biases have been extensively studied by basic and clinical scientists, but their underlying mechanism is not known. The present study combined measures of brain-electric activity and autonomic physiology to establish how threat biases emerge in human observers. Participants viewed stimuli designed to differentially challenge known properties of different neuronal populations along the visual pathway: location, eye, and orientation specificity. Biases were induced using aversive conditioning, with only one combination of eye, orientation, and location predicting a noxious loud noise, and were replicated in a separate group of participants. Selective heart rate-orienting responses to the conditioned threat stimulus indicated bias formation. Retinotopic visual brain responses were persistently and selectively enhanced after massive aversive learning for the threat stimulus only, and dissipated after extinction training. These changes were location-, eye-, and orientation-specific, supporting the hypothesis that short-term plasticity in primary visual neurons mediates the formation of perceptual biases to threat. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. From lecture theatre to portable media: students' perceptions of an enhanced podcast for revision.

    PubMed

    Shantikumar, Saran

    2009-06-01

    Podcasting is a novel medium used worldwide for information transfer and entertainment, available in both audio-only and enhanced audiovisual formats. We aimed to investigate medical students' perceptions of a series of enhanced podcasts for revision. Thirteen audiovisual podcasts covering general surgery were developed, consisting of a PowerPoint slideshow with a voiceover narrative. A questionnaire was distributed to 211 final-year medical students two months after the podcasts became available, addressing their perceptions of the enhanced podcast series as well as their current experience with podcasts and digital media players. The website from which the podcasts were available provided details on the number of downloads. Students who used the resource found the enhanced podcasts straightforward to access and a useful learning supplement, and felt that similar resources covering the remainder of the undergraduate medical syllabus would be useful for revision purposes. Students who had also used other audio-only medical podcasts indicated that the addition of a visual component improved the value of the resource. Audiovisual podcasts show promise as a revision aid that can be incorporated into undergraduate education. Further study will give insight into the potential roles and outcomes of enhanced podcasting within the medical curriculum.

  11. Exogenous Attention Enables Perceptual Learning

    PubMed Central

    Szpiro, Sarit F. A.; Carrasco, Marisa

    2015-01-01

    Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy in enabling learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. PMID:26502745

  12. Spatial attention facilitates assembly of the briefest percepts: Electrophysiological evidence from color fusion.

    PubMed

    Akyürek, Elkan G; van Asselt, E Manon

    2015-12-01

    When two different color stimuli are presented in rapid succession, the resulting percept is sometimes that of a mixture of both colors, due to a perceptual process called color fusion. Although color fusion might seem to occur very early in the visual pathway, and only happens across the briefest of stimulus presentation intervals (< 50 ms), the present study showed that spatial attention can alter the fusion process. In a series of experiments, spatial cues were presented that either validly indicated the location of a pair of (different) color stimuli in successive stimulus arrays, or did not, pointing toward isoluminant gray distractors in the other visual hemifield. Increased color fusion was observed for valid cues across a range of stimulus durations, at the expense of individual color reports. By contrast, perception of repeated, same-color stimulus pairs did not change, suggesting that the enhancement was specific to fusion, not color discrimination per se. Electrophysiological measures furthermore showed that the amplitude of the N1, N2pc, and P3 components of the ERP were differentially modulated during the perception of individual and fused colors, as a function of cueing and stimulus duration. Fusion itself, collapsed across cueing conditions, was reflected uniquely in N1 amplitude. Overall, the results suggest that spatial attention enhances color fusion and decreases competition between stimuli, constituting an adaptive slowdown in service of temporal integration. © 2015 Society for Psychophysiological Research.

  13. Activity in human visual and parietal cortex reveals object-based attention in working memory.

    PubMed

    Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph

    2015-02-25

    Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM, and that attentional selection in WM thus follows the same principles as in perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.

  14. Perceptual Contrast Enhancement with Dynamic Range Adjustment

    PubMed Central

    Zhang, Hong; Li, Yuecheng; Chen, Hao; Yuan, Ding; Sun, Mingui

    2013-01-01

    In recent years, although great efforts have been made to improve its performance, few histogram equalization (HE) methods take human visual perception (HVP) into account explicitly. The human visual system (HVS) is more sensitive to edges than to brightness. This paper exploits this property and develops a perceptual contrast enhancement approach with dynamic range adjustment through histogram modification. The use of perceptual contrast connects the image enhancement problem with the HVS. To precondition the input image before the HE procedure is applied, a perceptual contrast map (PCM) is constructed based on a modified Difference of Gaussians (DOG) algorithm. As a result, the contrast of the image is sharpened and high-frequency noise is suppressed. A modified Clipped Histogram Equalization (CHE) is also developed, which improves visual quality by automatically detecting the dynamic range of the image with improved perceptual contrast. Experimental results show that the new HE algorithm outperforms several state-of-the-art algorithms in improving perceptual contrast and enhancing details. In addition, the new algorithm is simple to implement, making it suitable for real-time applications. PMID:24339452
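    The paper's exact PCM construction is not reproduced in this record. The sketch below shows only the generic Difference-of-Gaussians idea it builds on: strong responses at edges and near-zero responses in flat regions. The sigma values and image are arbitrary, and the blur is a plain separable Gaussian rather than the authors' modified DOG.

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with edge padding; output keeps img's shape."""
    k = gaussian_kernel1d(sigma)
    r = len(k) // 2
    padded = np.pad(img, r, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, rows)

def perceptual_contrast_map(img, sigma_center=1.0, sigma_surround=3.0):
    """|center blur - surround blur|: large near edges, ~0 in flat regions."""
    return np.abs(blur(img, sigma_center) - blur(img, sigma_surround))

# A vertical step edge: the map peaks at the edge and vanishes far from it.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
pcm = perceptual_contrast_map(img)
```

    A map like this can then weight a histogram-modification step so that edge regions, to which the HVS is most sensitive, dominate the enhancement.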

  15. Evaluation of perception performance in neck dissection planning using eye tracking and attention landscapes

    NASA Astrophysics Data System (ADS)

    Burgert, Oliver; Örn, Veronika; Velichkovsky, Boris M.; Gessat, Michael; Joos, Markus; Strauß, Gero; Tietjen, Christian; Preim, Bernhard; Hertel, Ilka

    2007-03-01

    Neck dissection is a surgical intervention in which cervical lymph node metastases are removed. Accurate surgical planning is of high importance because misjudgment of the situation can cause severe harm to the patient. Diagnostic perception of radiological images by a surgeon is an acquired skill that can be enhanced by training and experience. To improve accuracy in the detection of pathological lymph nodes by newcomers and less experienced professionals, it is essential to understand how surgical experts solve the relevant visual and recognition tasks. Using eye tracking, and especially the newly developed attention landscape visualizations, it could be determined whether visualization options, for example 3D models instead of CT data, help to increase the accuracy and speed of neck dissection planning. Thirteen ORL surgeons with different levels of expertise participated in this study. They inspected different visualizations of 3D models and original CT datasets of patients. Among other methods, we used scanpath analysis and attention landscapes to interpret the inspection strategies. It was possible to distinguish different patterns of visual exploratory activity. The experienced surgeons exhibited a higher concentration of attention on a limited number of areas of interest and made fewer saccadic eye movements, indicating better orientation.

  16. Attention modulates perception of visual space

    PubMed Central

    Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.

    2017-01-01

    Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198

  17. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. The role of prestimulus activity in visual extinction☆

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  19. The role of prestimulus activity in visual extinction.

    PubMed

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-07-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. A Comparative Study on the Visual Perceptions of Children with Attention Deficit Hyperactivity Disorder

    NASA Astrophysics Data System (ADS)

    Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur

This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education, and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families were collected using a General Information Form, and the visual perception of children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder, and to discover whether the variables of gender, preschool education, and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically significant difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were significantly affected by gender, preschool education, and parents' educational status.
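The two-group comparison above relies on a standard nonparametric test. As a minimal, pure-Python sketch, the Mann-Whitney U statistic simply counts, over all cross-group pairs, how often one group's score exceeds the other's; all scores below are hypothetical illustration data, not values from the study.

```python
# Minimal sketch of the Mann-Whitney U statistic used for two-group
# comparisons; all scores below are hypothetical illustration data,
# not values from the study.

def mann_whitney_u(group_a, group_b):
    """Count cross-group pairs won by group_a (ties count 0.5);
    the smaller of U_a and U_b serves as the test statistic."""
    u_a = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u_a += 1.0
            elif a == b:
                u_a += 0.5
    u_b = len(group_a) * len(group_b) - u_a
    return min(u_a, u_b)

# Hypothetical Frostig-style visual perception scores
adhd_scores = [41, 38, 45, 36, 40, 43, 37, 39]
typical_scores = [52, 49, 55, 47, 51, 50, 44, 48]

u = mann_whitney_u(adhd_scores, typical_scores)
print(u)  # 1.0: almost every typical score exceeds every ADHD score
```

The Kruskal-Wallis test used for the education variable generalizes the same rank-based idea to three or more groups; in practice both are available as `scipy.stats.mannwhitneyu` and `scipy.stats.kruskal`.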

  1. Abnormal late visual responses and alpha oscillations in neurofibromatosis type 1: a link to visual and attention deficits

    PubMed Central

    2014-01-01

Background Neurofibromatosis type 1 (NF1) affects several areas of cognitive function, including visual processing and attention. We investigated the neural mechanisms underlying the visual deficits of children and adolescents with NF1 by studying visual evoked potentials (VEPs) and brain oscillations during visual stimulation and rest periods. Methods Electroencephalogram/event-related potential (EEG/ERP) responses were measured during visual processing (NF1 n = 17; controls n = 19) and idle periods with eyes closed and eyes open (NF1 n = 12; controls n = 14). Visual stimulation was chosen to bias activation of the three detection mechanisms: achromatic, red-green and blue-yellow. Results We found significant differences between the groups for late chromatic VEPs and a specific enhancement of parieto-occipital alpha amplitude both during visual stimulation and idle periods. Alpha modulation and the negative influence of alpha oscillations on visual performance were found in both groups. Conclusions Our findings suggest abnormal later stages of visual processing and enhanced amplitude of alpha oscillations, supporting the existence of deficits in basic sensory processing in NF1. Given the link between alpha oscillations, visual perception and attention, these results indicate a neural mechanism that might underlie the visual sensitivity deficits and increased lapses of attention observed in individuals with NF1. PMID:24559228

  2. Egocentric Direction and Position Perceptions are Dissociable Based on Only Static Lane Edge Information

    PubMed Central

    Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune

    2015-01-01

When observers perceive several objects in a space at the same time, they must also effectively perceive their own position as a viewpoint. However, little is known about observers' perception of their own spatial location based on the visual scene information available to them. Previous studies indicate that two distinct visual spatial processes operate during locomotion: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments containing only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions, examining the effects of two factors on egocentric direction and position perceptions. One is the "uprightness factor": "far" visual information is seen at a higher location than "near" visual information. The other is the "central vision factor": observers usually look at "far" visual information using central (i.e., foveal) vision, whereas they view "near" visual information using peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images, in which the upper half of the normal image was presented below the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is impaired by image inversion or image transposition, whereas egocentric position perception is robust to these transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception. Therefore, the two visual spatial perceptions of observers' own viewpoints are fundamentally dissociable. PMID:26648895

  3. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    ERIC Educational Resources Information Center

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  4. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  5. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    PubMed

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

Hearing the sound of laughter is important for social communication, but the processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing it can be seen, whether visual laughter similarly enhances the audibility of auditory laughter was unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  6. Visual Memories Bypass Normalization.

    PubMed

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
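Divisive normalization, the canonical computation this abstract refers to, divides each unit's driven response by the summed activity of a normalization pool. The following is an illustrative toy model with made-up parameters, not the study's analysis:

```python
# Toy sketch of divisive normalization: r_i = d_i^n / (sigma^n + sum_j d_j^n).
# The semi-saturation constant sigma, exponent n, and drive values are
# all illustrative.

def normalize(drives, sigma=1.0, n=2.0):
    pooled = sum(d ** n for d in drives)
    return [d ** n / (sigma ** n + pooled) for d in drives]

# A stimulus presented alone versus the same stimulus with a competitor:
alone = normalize([4.0, 0.0])
paired = normalize([4.0, 4.0])

# The competitor enlarges the normalization pool, suppressing the response
# to the first stimulus -- the signature probed in the experiments above.
print(round(alone[0], 3), round(paired[0], 3))
```

Pitting two remembered contrasts against each other, as the study did, asks whether memory representations show this same mutual suppression.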

  8. Cross-modal perception of rhythm in music and dance by cochlear implant users.

    PubMed

    Vongpaisal, Tara; Monaghan, Melanie

    2014-05-01

Two studies examined adult cochlear implant (CI) users' ability to match auditory rhythms occurring in music to visual rhythms occurring in dance (Cha Cha, Slow Swing, Tango and Jive). In Experiment 1, adult CI users (n = 10) and hearing controls matched a music excerpt to choreographed dance sequences presented as silent videos. In Experiment 2, participants matched a silent video of a dance sequence to music excerpts. CI users were successful in detecting timing congruencies across music and dance at well above-chance levels, suggesting that they were able to process the distinctive auditory and visual rhythm patterns that characterized each style. However, they were better able to detect cross-modal timing congruencies when the reference was an auditory rhythm than when the reference was a visual rhythm. Learning strategies that encourage cross-modal learning of musical rhythms may have applications in developing novel rehabilitative strategies to enhance music perception and appreciation outcomes of child implant users.

  9. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  10. Using Screencasts to Enhance Assessment Feedback: Students' Perceptions and Preferences

    ERIC Educational Resources Information Center

    Marriott, Pru; Teoh, Lim Keong

    2012-01-01

    In the UK, assessment and feedback have been regularly highlighted by the National Student Survey as critical aspects that require improvement. An innovative approach to delivering feedback that has proved successful in non-business-related disciplines is the delivery of audio and visual feedback using screencast technology. The feedback on…

  11. Effect of Musical Expertise on Visuospatial Abilities: Evidence from Reaction Times and Mental Imagery

    ERIC Educational Resources Information Center

    Brochard, Renaud; Dufour, Andre; Despres, Olivier

    2004-01-01

    Recently, the relationship between music and nonmusical cognitive abilities has been highly debated. It has been documented that formal music training would improve verbal, mathematical or visuospatial performance in children. In the experiments described here, we tested if visual perception and imagery abilities were enhanced in adult musicians…

  12. The Impact of a Performance Profiling Intervention on Athletes' Intrinsic Motivation

    ERIC Educational Resources Information Center

    Weston, Neil J. V.; Greenlees, Iain A.; Thelwell, Richard C.

    2011-01-01

    Originally developed by Butler (1989) with the Great Britain Olympic boxing team, performance profiling is an assessment tool primarily used by sport psychologists to enhance athlete awareness. The completed profile provides the athlete, the coach, and psychologist with a visual representation of the athlete's perception of his or her performance…

  13. Processing Digital Imagery to Enhance Perceptions of Realism

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2003-01-01

Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
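The retinex idea at the core of MSRCR can be sketched in one dimension: each pixel's log value minus the log of a local surround average. The sketch below is greatly simplified (a single scale, a box surround instead of Gaussians, no color-restoration step), with toy signal rows in place of an image:

```python
import math

# Greatly simplified single-scale retinex sketch: log(pixel) minus
# log(local surround mean). Real MSRCR uses Gaussian surrounds at
# several scales plus a color-restoration step; the 1-D "image" rows
# here are toy data.

def single_scale_retinex(row, radius=1):
    out = []
    for i, v in enumerate(row):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        surround = sum(row[lo:hi]) / (hi - lo)
        out.append(math.log(v) - math.log(surround))
    return out

# The same 2x step edge in a dim region and a bright region:
dim = single_scale_retinex([10.0, 10.0, 20.0, 20.0])
bright = single_scale_retinex([100.0, 100.0, 200.0, 200.0])

# Retinex output depends on local ratios, not absolute intensity, so the
# two step edges produce identical responses -- the property that lets
# the algorithm pull perceptible detail out of shadowed regions.
print(all(abs(a - b) < 1e-9 for a, b in zip(dim, bright)))  # True
```

The multi-scale version averages this computation over several surround sizes, trading off dynamic-range compression against tonal rendition.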

  14. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow

    PubMed Central

    Layton, Oliver W.; Fajen, Brett R.

    2016-01-01

Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning; it is similar to the other models, except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes in the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception. PMID:27341686
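The recurrent competition described above can be illustrated with a toy soft winner-take-all network (a hedged sketch with made-up parameters, not the authors' published model): each unit is driven by its input plus self-excitation, minus inhibition from the pooled activity of its competitors, so an established heading estimate resists transient perturbations.

```python
# Toy soft winner-take-all sketch (illustrative parameters, not the
# authors' published model). Each unit receives its feedforward input
# plus self-excitation, minus inhibition from the pooled activity of
# its competitors, with a leak term and rectification.

def step(a, inputs, dt=0.1, w_exc=0.5, w_inh=0.8):
    total = sum(a)
    return [x + dt * (-x + max(0.0, inp + w_exc * x - w_inh * (total - x)))
            for x, inp in zip(a, inputs)]

# Steady input favoring unit 0 (the heading consistent with self-motion):
a = [0.1, 0.1, 0.1]
steady = [1.0, 0.6, 0.4]
for _ in range(300):
    a = step(a, steady)
winner_before = max(range(3), key=lambda i: a[i])

# A brief transient favoring unit 1 (e.g. a moving object crossing the
# path) fails to flip the established winner, because the winner's
# pooled inhibition keeps the competitor silent:
transient = [0.4, 1.0, 0.6]
for _ in range(5):
    a = step(a, transient)
winner_after = max(range(3), key=lambda i: a[i])

print(winner_before, winner_after)  # 0 0
```

The suppression of transients falls directly out of the dynamics: a competitor can only grow once its input overcomes the standing inhibition from the current winner, which takes sustained rather than momentary evidence.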

  15. Consumer perceptions of medication warnings about driving: a comparison of French and Australian labels.

    PubMed

    Smyth, T; Sheehan, M; Siskind, V; Mercier-Guyon, C; Mallaret, M

    2013-01-01

Little research has examined user perceptions of medication warnings about driving. Consumer perceptions of the Australian national approach to medication warnings about driving are examined, and the Australian approach to warning presentation is compared with an alternative approach used in France. Visual characteristics of the warnings and overall warning readability are investigated, as are risk perceptions and behavioral intentions associated with the warnings. Surveys were conducted with 358 public hospital outpatients in Queensland, Australia, extended by a supplementary comparison study of French hospital outpatients (n = 75). The results suggest that the Australian warning approach of using a combination of visual characteristics is important for consumers but that the use of a pictogram could enhance effects. Significantly higher levels of risk perception were found among the sample for the French highest severity label compared to the analogous mandatory Australian warning, with a similar trend evident in the French study results. The results also indicated that the French label was associated with more cautious behavioral intentions. These findings are potentially important for the Australian approach to warnings about medication-related driving impairment, and they contribute practical evidence that can be used to enhance the effectiveness of warnings and develop countermeasures in this area. Hospital pharmacy patients are likely to be among those most knowledgeable about and aware of medication warning labeling; even in this group, the results suggest that a review of the Australian warning system would be useful, particularly given increasing evidence of the associated driving risks. Reviewing the text size and readability of messages, adding pictograms, and clarifying the importance of potential risk in a general community context are recommended for consideration and further research.

  16. Enhanced dimension-specific visual working memory in grapheme–color synesthesia

    PubMed Central

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-01-01

There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme–color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, confers benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed better color working memory than controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibilities that the enhanced working memory among synesthetes was due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. PMID:23892185

  17. Altered figure-ground perception in monkeys with an extra-striate lesion.

    PubMed

    Supèr, Hans; Lamme, Victor A F

    2007-11-05

The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex, the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception, we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield and normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.

  18. Eye movements and attention: The role of pre-saccadic shifts of attention in perception, memory and the control of saccades

    PubMed Central

    Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen

    2012-01-01

    Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798

  19. Grasp posture alters visual processing biases near the hands

    PubMed Central

    Thomas, Laura E.

    2015-01-01

    Observers experience biases in visual processing for objects within easy reach of their hands that may assist them in evaluating items that are candidates for action. I investigated the hypothesis that hand postures affording different types of actions differentially bias vision. Across three experiments, participants performed global motion detection and global form perception tasks while their hands were positioned a) near the display in a posture affording a power grasp, b) near the display in a posture affording a precision grasp, or c) in their laps. Although the power grasp posture facilitated performance on the motion task, the precision grasp posture instead facilitated performance on the form task. These results suggest that the visual system weights processing based on an observer’s current affordances for specific actions: fast and forceful power grasps enhance temporal sensitivity, while detail-oriented precision grasps enhance spatial sensitivity. PMID:25862545

  20. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    PubMed

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  1. Perception and control of rotorcraft flight

    NASA Technical Reports Server (NTRS)

    Owen, Dean H.

    1991-01-01

Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationships among these problems are the basis of this article.

  2. Suggested Activities to Use With Children Who Present Symptoms of Visual Perception Problems, Elementary Level.

    ERIC Educational Resources Information Center

    Washington County Public Schools, Washington, PA.

    Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…

  3. [Visual perception and its disorders].

    PubMed

    Ruf-Bächtiger, L

    1989-11-21

It's the brain and not the eye that decides what is perceived. In spite of this fact, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, different functional domains of visual perception can be distinguished. Among the more important of these domains are digit span, visual discrimination and figure-ground discrimination. Evaluation of these functional domains allows us to better understand children with disorders of visual perception and to develop more effective treatment methods.

  4. The Developmental Test of Visual Perception-Third Edition (DTVP-3): A Review, Critique, and Practice Implications

    ERIC Educational Resources Information Center

    Brown, Ted; Murdolo, Yuki

    2015-01-01

    The "Developmental Test of Visual Perception-Third Edition" (DTVP-3) is a recent revision of the "Developmental Test of Visual Perception-Second Edition" (DTVP-2). The DTVP-3 is designed to assess the visual perceptual and/or visual-motor integration skills of children from 4 to 12 years of age. The test is standardized using…

  5. A Critical Review of the "Motor-Free Visual Perception Test-Fourth Edition" (MVPT-4)

    ERIC Educational Resources Information Center

    Brown, Ted; Peres, Lisa

    2018-01-01

    The "Motor-Free Visual Perception Test-fourth edition" (MVPT-4) is a revised version of the "Motor-Free Visual Perception Test-third edition." The MVPT-4 is used to assess the visual-perceptual ability of individuals aged 4.0 through 80+ years via a series of visual-perceptual tasks that do not require a motor response. Test…

  6. Texture Segregation Causes Early Figure Enhancement and Later Ground Suppression in Areas V1 and V4 of Visual Cortex

    PubMed Central

    Poort, Jasper; Self, Matthew W.; van Vugt, Bram; Malkki, Hemi; Roelfsema, Pieter R.

    2016-01-01

    Segregation of images into figures and background is fundamental for visual perception. Cortical neurons respond more strongly to figural image elements than to background elements, but the mechanisms of figure–ground modulation (FGM) are only partially understood. It is unclear whether FGM in early and mid-level visual cortex is caused by an enhanced response to the figure, a suppressed response to the background, or both. We studied neuronal activity in areas V1 and V4 in monkeys performing a texture segregation task. We compared texture-defined figures with homogeneous textures and found an early enhancement of the figure representation, and a later suppression of the background. Across neurons, the strength of figure enhancement was independent of the strength of background suppression. We also examined activity in the different V1 layers. Both figure enhancement and ground suppression were strongest in superficial and deep layers and weaker in layer 4. The current–source density profiles suggested that figure enhancement was caused by stronger synaptic inputs in feedback-recipient layers 1, 2, and 5 and ground suppression by weaker inputs in these layers, suggesting an important role for feedback connections from higher level areas. These results provide new insights into the mechanisms for figure–ground organization. PMID:27522074

  7. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the later tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, one that may persist over longer durations. PMID:25954166

  8. Exogenous Attention Enables Perceptual Learning.

    PubMed

    Szpiro, Sarit F A; Carrasco, Marisa

    2015-12-01

Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding.

  9. Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.

    PubMed

    Auer, E T; Bernstein, L E; Coulter, D C

    1998-10-01

    Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.

  10. Social vision: sustained perceptual enhancement of affective facial cues in social anxiety

    PubMed Central

    McTeague, Lisa M.; Shumen, Joshua R.; Wieser, Matthias J.; Lang, Peter J.; Keil, Andreas

    2010-01-01

    Heightened perception of facial cues is at the core of many theories of social behavior and its disorders. In the present study, we continuously measured electrocortical dynamics in human visual cortex, as evoked by happy, neutral, fearful, and angry faces. Thirty-seven participants endorsing high versus low generalized social anxiety (upper and lower tertiles of 2,104 screened undergraduates) viewed naturalistic faces flickering at 17.5 Hz to evoke steady-state visual evoked potentials (ssVEPs), recorded from 129 scalp electrodes. Electrophysiological data were evaluated in the time-frequency domain after linear source space projection using the minimum norm method. Source estimation indicated an early visual cortical origin of the face-evoked ssVEP, which showed sustained amplitude enhancement for emotional expressions specifically in individuals with pervasive social anxiety. Participants in the low symptom group showed no such sensitivity, and a correlational analysis across the entire sample revealed a strong relationship between self-reported interpersonal anxiety/avoidance and enhanced visual cortical response amplitude for emotional, versus neutral expressions. This pattern was maintained across the 3500 ms viewing epoch, suggesting that temporally sustained, heightened perceptual bias towards affective facial cues is associated with generalized social anxiety. PMID:20832490
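The frequency-tagging logic behind this ssVEP measure can be sketched numerically: a face flickering at 17.5 Hz drives a cortical response at exactly that frequency, whose amplitude can be read off the FFT of the recorded signal. A minimal illustration follows; the sampling rate, epoch length, and signal amplitudes below are hypothetical choices for the sketch, not values from the study:

```python
import numpy as np

def ssvep_amplitude(signal, fs, f_tag):
    """Estimate the amplitude of the steady-state response at the tagging frequency."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n          # normalized one-sided spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - f_tag))      # bin closest to the flicker frequency
    return 2.0 * np.abs(spectrum[idx])          # factor 2: one-sided amplitude

# Hypothetical 2 s epoch sampled at 500 Hz: 0.5 Hz resolution, so 17.5 Hz falls on a bin
fs, f_tag = 500.0, 17.5
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 3.0 * np.sin(2 * np.pi * f_tag * t) + 0.5 * rng.standard_normal(t.size)

amp = ssvep_amplitude(eeg, fs, f_tag)           # recovers roughly the driven amplitude, 3.0
```

Choosing the epoch length so that the tag frequency lands exactly on an FFT bin avoids spectral leakage, which is why frequency-tagging designs pick flicker rates commensurate with the analysis window.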

  11. Changing the Spatial Scope of Attention Alters Patterns of Neural Gain in Human Cortex

    PubMed Central

    Garcia, Javier O.; Rungratsameetaweemana, Nuttida; Sprague, Thomas C.

    2014-01-01

    Over the last several decades, spatial attention has been shown to influence the activity of neurons in visual cortex in various ways. These conflicting observations have inspired competing models to account for the influence of attention on perception and behavior. Here, we used electroencephalography (EEG) to assess steady-state visual evoked potentials (SSVEP) in human subjects and showed that highly focused spatial attention primarily enhanced neural responses to high-contrast stimuli (response gain), whereas distributed attention primarily enhanced responses to medium-contrast stimuli (contrast gain). Together, these data suggest that different patterns of neural modulation do not reflect fundamentally different neural mechanisms, but instead reflect changes in the spatial extent of attention. PMID:24381272
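The response-gain versus contrast-gain distinction drawn in this record is often formalized with the Naka-Rushton contrast response function, R(c) = Rmax * c^n / (c^n + c50^n): response gain scales Rmax, boosting responses most at high contrast, while contrast gain lowers c50, boosting responses most at intermediate contrast. A sketch under hypothetical parameter values (not fitted to the study's data):

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0):
    """Naka-Rushton contrast response function."""
    return r_max * c**n / (c**n + c50**n)

contrast = np.linspace(0.01, 1.0, 100)
neutral = naka_rushton(contrast)
# Response gain: multiplicative scaling of Rmax -> largest boost at high contrast
response_gain = naka_rushton(contrast, r_max=1.3)
# Contrast gain: leftward shift of c50 -> largest boost at intermediate contrast
contrast_gain = naka_rushton(contrast, c50=0.3 / 1.3)

boost_rg = response_gain - neutral   # grows monotonically with contrast
boost_cg = contrast_gain - neutral   # peaks at mid contrast, vanishes at the extremes
```

The two attentional boosts have qualitatively different contrast profiles even though the underlying function is the same, which is the signature the SSVEP data in this study exploit.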

  12. The development of visual speech perception in Mandarin Chinese-speaking children.

    PubMed

    Chen, Liang; Lei, Jianghua

    2017-01-01

The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13, after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception, with peak performance in 13-year-olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in the development of visual speech perception.

  13. Cortical visual prostheses: from microstimulation to functional percept

    NASA Astrophysics Data System (ADS)

    Najarpour Foroushani, Armin; Pack, Christopher C.; Sawan, Mohamad

    2018-04-01

Cortical visual prostheses are intended to restore vision by targeted electrical stimulation of the visual cortex. The perception of spots of light, called phosphenes, resulting from microstimulation of the visual pathway, suggests the possibility of creating meaningful percepts made of phosphenes. However, to date, electrical stimulation of V1 has not resulted in the perception of phosphenated images that go beyond punctate spots of light. In this review, we summarize the clinical and experimental progress that has been made in generating phosphenes and modulating their associated perceptual characteristics in human and macaque primary visual cortex (V1). We focus specifically on the effects of different microstimulation parameters on perception and we analyse key challenges facing the generation of meaningful artificial percepts. Finally, we propose solutions to these challenges based on the application of supervised learning of population codes for spatial stimulation of visual cortex.

  14. Analysis of EEG signals related to artists and nonartists during visual perception, mental imagery, and rest using approximate entropy.

    PubMed

    Shourie, Nasrin; Firoozabadi, Mohammad; Badie, Kambiz

    2014-01-01

In this paper, differences between multichannel EEG signals of artists and nonartists were analyzed during visual perception and mental imagery of some paintings and at resting condition using approximate entropy (ApEn). It was found that ApEn is significantly higher for artists during the visual perception and the mental imagery in the frontal lobe, suggesting that artists process more information during these conditions. It was also observed that ApEn decreases for the two groups during visual perception, due to increasing mental load; however, their variation patterns are different. This difference may be used for measuring progress in novice artists. In addition, it was found that ApEn is significantly lower during the visual perception than the mental imagery in some of the channels, suggesting that the visual perception task requires more cerebral effort.
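Approximate entropy quantifies signal regularity: it compares how often length-m patterns repeat (within a tolerance r) against how often the same patterns still match when extended to length m+1. A compact sketch of the standard Pincus (1991) formulation, not the authors' exact preprocessing or parameter choices:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy of a 1-D signal.
    m: embedding dimension; r: tolerance (defaults to 0.2 * std of the signal)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # All overlapping length-m templates, one per row
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included)
        counts = np.mean(dists <= r, axis=1)
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

regular = np.sin(np.linspace(0, 8 * np.pi, 300))            # predictable: low ApEn
noisy = np.random.default_rng(0).standard_normal(300)       # irregular: higher ApEn
```

Higher ApEn for a channel, as reported here for artists' frontal electrodes, indicates that short patterns in the signal repeat less predictably.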

  15. Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli.

    PubMed

    Keil, Andreas

    2006-01-01

    Emotions can be viewed as action dispositions, preparing an individual to act efficiently and successfully in situations of behavioral relevance. To initiate optimized behavior, it is essential to accurately process the perceptual elements indicative of emotional relevance. The present chapter discusses effects of affective content on neural and behavioral parameters of perception, across different information channels. Electrocortical data are presented from studies examining affective perception with pictures and words in different task contexts. As a main result, these data suggest that sensory facilitation has an important role in affective processing. Affective pictures appear to facilitate perception as a function of emotional arousal at multiple levels of visual analysis. If the discrimination between affectively arousing vs. nonarousing content relies on fine-grained differences, amplification of the cortical representation may occur as early as 60-90 ms after stimulus onset. Affectively arousing information as conveyed via visual verbal channels was not subject to such very early enhancement. However, electrocortical indices of lexical access and/or activation of semantic networks showed that affectively arousing content may enhance the formation of semantic representations during word encoding. It can be concluded that affective arousal is associated with activation of widespread networks, which act to optimize sensory processing. On the basis of prioritized sensory analysis for affectively relevant stimuli, subsequent steps such as working memory, motor preparation, and action may be adjusted to meet the adaptive requirements of the situation perceived.

  16. Enhancing Digital Access to Learning Materials for Canadians with Perceptual Disabilities: A Pilot Study. Research Report

    ERIC Educational Resources Information Center

    Lockerby, Christina; Breau, Rachel; Zuvela, Biljana

    2006-01-01

    By exploring the experiences of participants with DAISY (Digital Accessible Information System) Talking Books, the study reported in this article not only discovered how people who are blind, visually impaired, and/or print-disabled read DAISY books, but also identified participants' perceptions of DAISY as being particularly useful in their…

  17. Educational Applications of Vision Therapy: A Pilot Study on Children with Autism.

    ERIC Educational Resources Information Center

    Lovelace, Kelly; Rhodes, Heidi; Chambliss, Catherine

    This report discusses the outcomes of a study that explored the feasibility of using vision therapy (VT) as part of an interdisciplinary approach to the education of children with autism. Traditional research on VT has explored its usefulness in helping patients to use both eyes together, improve depth perception, and enhance visual acuity.…

  18. Vision in Children and Adolescents with Autistic Spectrum Disorder: Evidence for Reduced Convergence

    ERIC Educational Resources Information Center

    Milne, Elizabeth; Griffiths, Helen; Buckley, David; Scope, Alison

    2009-01-01

    Evidence of atypical perception in individuals with ASD is mainly based on self report, parental questionnaires or psychophysical/cognitive paradigms. There have been relatively few attempts to establish whether binocular vision is enhanced, intact or abnormal in those with ASD. To address this, we screened visual function in 51 individuals with…

  19. Sex differences in the development of brain mechanisms for processing biological motion.

    PubMed

    Anderson, L C; Bolling, D Z; Schelinski, S; Coffman, M C; Pelphrey, K A; Kaiser, M D

    2013-12-01

Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing.

  20. Increased discriminability of authenticity from multimodal laughter is driven by auditory information.

    PubMed

    Lavan, Nadine; McGettigan, Carolyn

    2017-10-01

    We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

  1. Sex Differences in the Development of Brain Mechanisms for Processing Biological Motion

    PubMed Central

    Anderson, L.C.; Bolling, D.Z.; Schelinski, S.; Coffman, M.C.; Pelphrey, K.A.; Kaiser, M.D.

    2013-01-01

    Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. PMID:23876243

  2. Perceiving groups: The people perception of diversity and hierarchy.

    PubMed

    Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L

    2018-05-01

The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions.

  3. Abnormal Size-Dependent Modulation of Motion Perception in Children with Autism Spectrum Disorder (ASD)

    PubMed Central

    Sysoeva, Olga V.; Galuta, Ilia A.; Davletshina, Maria S.; Orekhova, Elena V.; Stroganova, Tatiana A.

    2017-01-01

Excitation/Inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception at least two phenomena critically depend on E/I balance in visual cortex: spatial suppression (SS), and spatial facilitation (SF) corresponding to impoverished or improved motion perception with increasing stimuli size, respectively. While SS is dominant at high contrast, SF is evident for low contrast stimuli, due to the prevalence of inhibitory contextual modulations in the former, and excitatory ones in the latter case. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in Autism Spectrum Disorder (ASD). Our study aimed to replicate previous findings, and to explore the putative contribution of deficient inhibitory influences into an enhanced SF index in ASD—a cornerstone for the interpretation proposed by Foss-Feig et al. (2013). The SS and SF were examined in 40 boys with ASD, with a broad spectrum of intellectual abilities (63 < IQ < 127), and 44 typically developing (TD) boys, aged 6–15 years. The stimuli of small (1°) and large (12°) radius were presented under high (100%) and low (1%) contrast conditions. Social Responsiveness Scale and Sensory Profile Questionnaire were used to assess the autism severity and sensory processing abnormalities. We found that the SS index was atypically reduced, while the SF index was abnormally enhanced in children with ASD. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indexes were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index but not the SS index correlated with the severity of autism and the poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD. Nonetheless, the absence of correlation between SF and SS indexes paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample emphasizes the role of the enhanced excitatory influences by themselves in the observed abnormalities in low-level visual phenomena found in ASD. PMID:28405183
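SS and SF indexes in this line of work (following Foss-Feig et al., 2013) are typically computed from duration thresholds, i.e., the minimum presentation time needed to judge motion direction: suppression shows up as a threshold increase with stimulus size at high contrast, facilitation as a threshold decrease at low contrast. A sketch under illustrative assumptions; the log-ratio form and the example threshold values below are not figures from this study:

```python
import math

def suppression_index(thr_small_high_ms, thr_large_high_ms):
    """SS: positive when large high-contrast stimuli need LONGER durations (suppression)."""
    return math.log10(thr_large_high_ms / thr_small_high_ms)

def facilitation_index(thr_small_low_ms, thr_large_low_ms):
    """SF: positive when large low-contrast stimuli need SHORTER durations (facilitation)."""
    return math.log10(thr_small_low_ms / thr_large_low_ms)

# Illustrative thresholds: doubling with size at high contrast, halving at low contrast
ss = suppression_index(30.0, 60.0)    # log10(2), about 0.30
sf = facilitation_index(120.0, 60.0)  # log10(2), about 0.30
```

On this convention a reduced SS index and an enhanced SF index, as reported for the ASD group, both point toward weaker inhibitory and/or stronger excitatory contextual modulation.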

  4. The Gestalt Principle of Similarity Benefits Visual Working Memory

    PubMed Central

    Peterson, Dwight J.; Berryhill, Marian E.

    2013-01-01

    Visual working memory (VWM) is essential for many cognitive processes yet it is notably limited in capacity. Visual perception processing is facilitated by Gestalt principles of grouping, such as connectedness, similarity, and proximity. This introduces the question: do these perceptual benefits extend to VWM? If so, can this be an approach to enhance VWM function by optimizing the processing of information? Previous findings demonstrate that several Gestalt principles (connectedness, common region, and spatial proximity) do facilitate VWM performance in change detection tasks (Woodman, Vecera, & Luck, 2003; Xu, 2002a, 2006; Xu & Chun, 2007; Jiang, Olson & Chun, 2000). One prevalent Gestalt principle, similarity, has not been examined with regard to facilitating VWM. Here, we investigated whether grouping by similarity benefits VWM. Experiment 1 established the basic finding that VWM performance could benefit from grouping. Experiment 2 replicated and extended this finding by showing that similarity was only effective when the similar stimuli were proximal. In short, the VWM performance benefit derived from similarity was constrained by spatial proximity such that similar items need to be near each other. Thus, the Gestalt principle of similarity benefits visual perception, but it can provide benefits to VWM as well. PMID:23702981

  5. The Gestalt principle of similarity benefits visual working memory.

    PubMed

    Peterson, Dwight J; Berryhill, Marian E

    2013-12-01

    Visual working memory (VWM) is essential for many cognitive processes, yet it is notably limited in capacity. Visual perception processing is facilitated by Gestalt principles of grouping, such as connectedness, similarity, and proximity. This introduces the question, do these perceptual benefits extend to VWM? If so, can this be an approach to enhance VWM function by optimizing the processing of information? Previous findings have demonstrated that several Gestalt principles (connectedness, common region, and spatial proximity) do facilitate VWM performance in change detection tasks (Jiang, Olson, & Chun, 2000; Woodman, Vecera, & Luck, 2003; Xu, 2002, 2006; Xu & Chun, 2007). However, one prevalent Gestalt principle, similarity, has not been examined with regard to facilitating VWM. Here, we investigated whether grouping by similarity benefits VWM. Experiment 1 established the basic finding that VWM performance could benefit from grouping. Experiment 2 replicated and extended this finding by showing that similarity was only effective when the similar stimuli were proximal. In short, the VWM performance benefit derived from similarity was constrained by spatial proximity, such that similar items need to be near each other. Thus, the Gestalt principle of similarity benefits visual perception, but it can provide benefits to VWM as well.

  6. A systematic review of the technology-based assessment of visual perception and exploration behaviour in association football.

    PubMed

    McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan

    2018-04-01

To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers; however, no studies have investigated the exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.

  7. Feature-based attentional modulations in the absence of direct visual stimulation.

    PubMed

    Serences, John T; Boynton, Geoffrey M

    2007-07-19

When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
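The pattern-classification step described here can be illustrated with a toy decoder: simulate voxel response patterns under two attentional states and predict the attended motion direction with a simple nearest-centroid classifier. The study's actual classifier and feature set may differ; all numbers below are synthetic:

```python
import numpy as np

def fit_centroids(X, y):
    """Mean voxel pattern per attentional state (the training step)."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(X, classes, centroids):
    """Assign each pattern to the class with the nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Synthetic data: 20 voxels, two attended directions with distinct mean patterns
rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 20
mu = rng.normal(0, 1, (2, n_voxels))                      # state-specific mean pattern
y_train = np.repeat([0, 1], n_trials // 2)
X_train = mu[y_train] + 0.5 * rng.standard_normal((n_trials, n_voxels))
y_test = np.repeat([0, 1], n_trials // 2)
X_test = mu[y_test] + 0.5 * rng.standard_normal((n_trials, n_voxels))

classes, centroids = fit_centroids(X_train, y_train)
accuracy = np.mean(predict(X_test, classes, centroids) == y_test)
```

Decoding accuracy well above chance on held-out trials is the evidence that a region's activity carries information about the attended feature, including in regions without direct stimulation.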

  8. [Inductive reasoning and intelligence test performance--analysis of the manner of the effect of 2 thought training methods for children].

    PubMed

    Hager, W; Hasselhorn, M; Hübner, S

    1995-10-01

For his training programs of inductive reasoning, Klauer postulates a transfer effect to inductive thinking as well as to (performance in tests of) intelligence. As evidence for both claims, however, he uses the same data: the same tests serve to demonstrate enhancement of inductive thinking and transfer to performance in intelligence tests. Moreover, Klauer's claim to train inductive thinking is criticized, since better performance in at least some of the tests he administers can result from enhancements in the area of visual perception. Finally, we ask what kind of effects the programs produce: are they mere coaching effects, or better performance due to enhanced competencies? The empirical evidence suggests that positive effects on inductive thinking do not last as long as the perceptual competencies that are necessarily fostered when visual material is presented to children.

  9. Attention Enhances Synaptic Efficacy and Signal-to-Noise in Neural Circuits

    PubMed Central

    Briggs, Farran; Mangun, George R.; Usrey, W. Martin

    2013-01-01

    Summary Attention is a critical component of perception. However, the mechanisms by which attention modulates neuronal communication to guide behavior are poorly understood. To elucidate the synaptic mechanisms of attention, we developed a sensitive assay of attentional modulation of neuronal communication. In alert monkeys performing a visual spatial attention task, we probed thalamocortical communication by electrically stimulating neurons in the lateral geniculate nucleus of the thalamus while simultaneously recording shock-evoked responses from monosynaptically connected neurons in primary visual cortex. We found that attention enhances neuronal communication by (1) increasing the efficacy of presynaptic input in driving postsynaptic responses, (2) increasing synchronous responses among ensembles of postsynaptic neurons receiving independent input, and (3) decreasing redundant signals between postsynaptic neurons receiving common input. These results demonstrate that attention finely tunes neuronal communication at the synaptic level by selectively altering synaptic weights, enabling enhanced detection of salient events in the noisy sensory milieu. PMID:23803766

  10. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    PubMed

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  11. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
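    The user-defined linear sequential-loop filter arrangement described above can be sketched roughly as follows. This is an illustrative sketch only; the filter functions, frame format, and class names are assumptions for the example, not the actual μAVS2 implementation.

```python
# Illustrative sketch of a linear sequential-loop filter pipeline, in the
# spirit of the architecture described above. The individual filters and
# the frame representation are hypothetical examples.

def to_grayscale(frame):
    # Average the RGB channels of each pixel (frame: rows of (r, g, b) tuples).
    return [[sum(px) // 3 for px in row] for row in frame]

def threshold(frame, cutoff=128):
    # Binarize: bright pixels become 255, dark pixels become 0.
    return [[255 if px >= cutoff else 0 for px in row] for row in frame]

class FilterPipeline:
    """Applies a user-defined sequence of filters to each incoming frame."""

    def __init__(self, filters):
        self.filters = list(filters)  # executed strictly in this order

    def process(self, frame):
        for f in self.filters:   # linear sequential loop: the output of one
            frame = f(frame)     # filter becomes the input of the next
        return frame

# Example: a two-stage pipeline applied to a tiny 1x2 "frame".
pipeline = FilterPipeline([to_grayscale, threshold])
out = pipeline.process([[(200, 210, 220), (10, 20, 30)]])
# out == [[255, 0]]
```

    Because each filter only consumes the previous filter's output, memory use stays bounded by a single frame, which is consistent with the reduced memory and CPU requirements claimed for the sequential-loop design.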

  12. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. MicroAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future.

  13. Endogenous modulation of human visual cortex activity improves perception at twilight.

    PubMed

    Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A

    2018-04-10

    Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.

  14. Optical phonetics and visual perception of lexical and phrasal stress in English.

    PubMed

    Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer

    2009-01-01

    In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.

  15. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. One contested issue in this area has been the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out with adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia is impaired. The comparison between groups shows that the achievement of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. We conclude that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. Because these visual and auditory perceptual deficits are not a consequence of failing to learn to read, the findings support the temporal processing deficit theory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Eye movements and attention in reading, scene perception, and visual search.

    PubMed

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  17. Cognitive aspects of color

    NASA Astrophysics Data System (ADS)

    Derefeldt, Gunilla A. M.; Menu, Jean-Pierre; Swartling, Tiina

    1995-04-01

    This report surveys cognitive aspects of color in terms of behavioral, neuropsychological, and neurophysiological data. Color is usually defined as psychophysical color or as perceived color. Behavioral data on categorical color perception, absolute judgement of colors, color coding, visual search, and visual awareness refer to the more cognitive aspects of color. These are of major importance in visual synthesis and spatial organization, as already shown by the Gestalt psychologists. Neuropsychological and neurophysiological findings provide evidence for an interrelation between cognitive color and spatial organization. Color also enhances planning strategies, as has been shown by studies on color and eye movements. Memory colors and the color- language connections in the brain also belong among the cognitive aspects of color.

  18. Sharpening vision by adapting to flicker.

    PubMed

    Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A

    2016-11-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision-allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.

  19. Sharpening vision by adapting to flicker

    PubMed Central

    Arnold, Derek H.; Williams, Jeremy D.; Phipps, Natasha E.; Goodale, Melvyn A.

    2016-01-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision—allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because “blur” signals are mitigated. PMID:27791115

  20. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors 0270-6474/17/376638-10$15.00/0.

  1. Recognition and surprise alter the human visual evoked response.

    PubMed Central

    Neville, H; Snyder, E; Woods, D; Galambos, R

    1982-01-01

    Event-related brain potentials (ERPs) to colored slides contained a late positive component that was significantly enhanced when adults recognized the person, place, or painting in the photograph. Additionally, two late components changed in amplitude in correspondence with the amount of surprise reported. Because subjects received no instructions to differentiate among the slides, these changes in brain potentials reflect natural classifications made according to their perceptions and evaluations of the pictorial material. This may be a useful paradigm with which to assess perception, memory, and orienting capacities in populations, such as infants, who cannot follow verbal instructions. PMID:6952260

  2. Lightness Constancy in Surface Visualization

    PubMed Central

    Szafir, Danielle Albers; Sarikaya, Alper; Gleicher, Michael

    2016-01-01

    Color is a common channel for displaying data in surface visualization, but is affected by the shadows and shading used to convey surface depth and shape. Understanding encoded data in the context of surface structure is critical for effective analysis in a variety of domains, such as in molecular biology. In the physical world, lightness constancy allows people to accurately perceive shadowed colors; however, its effectiveness in complex synthetic environments such as surface visualizations is not well understood. We report a series of crowdsourced and laboratory studies that confirm the existence of lightness constancy effects for molecular surface visualizations using ambient occlusion. We provide empirical evidence of how common visualization design decisions can impact viewers’ abilities to accurately identify encoded surface colors. These findings suggest that lightness constancy aids in understanding color encodings in surface visualization and reveal a correlation between visualization techniques that improve color interpretation in shadow and those that enhance perceptions of surface depth. These results collectively suggest that understanding constancy in practice can inform effective visualization design. PMID:26584495

  3. Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction

    PubMed Central

    Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick

    2016-01-01

    Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907

  4. Enhanced dimension-specific visual working memory in grapheme-color synesthesia.

    PubMed

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-10-01

    There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme-color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed better color working memory than controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibilities of enhanced working memory among synesthetes being due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Visual Perception of Force: Comment on White (2012)

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2012-01-01

    White (2012) proposed that kinematic features in a visual percept are matched to stored representations containing information regarding forces (based on prior haptic experience) and that information in the matched, stored representations regarding forces is then incorporated into visual perception. Although some elements of White's (2012) account…

  6. Neural Responses to Complex Auditory Rhythms: The Role of Attending

    PubMed Central

    Chapin, Heather L.; Zanto, Theodore; Jantzen, Kelly J.; Kelso, Scott J. A.; Steinberg, Fred; Large, Edward W.

    2010-01-01

    The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. PMID:21833279

  7. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Pea, Roy

    2013-01-01

    In this paper we present the results of an eye-tracking study on collaborative problem-solving dyads. Dyads remotely collaborated to learn from contrasting cases involving basic concepts about how the human brain processes visual information. In one condition, dyads saw the eye gazes of their partner on the screen; in a control group, they did not…

  8. An Examination of Undergraduate Student's Perceptions and Predilections of the Use of YouTube in the Teaching and Learning Process

    ERIC Educational Resources Information Center

    Buzzetto-More, Nicole A.

    2014-01-01

    Pervasive social networking and media sharing technologies have augmented perceptual understanding and information gathering and, while text-based resources have remained the standard for centuries, they do not appeal to the hyper-stimulated visual learners of today. In particular, the research suggests that targeted YouTube videos enhance student…

  9. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.

  10. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Watanabe, Katsumi

    2015-01-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness. PMID:27648219

  11. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  12. Presenting self-monitoring test results for consumers: the effects of graphical formats and age.

    PubMed

    Tao, Da; Yuan, Juan; Qu, Xingda

    2018-05-11

    To examine the effects of graphical formats and age on consumers' comprehension and perceptions of the use of self-monitoring test results. Participants (36 older and 36 young adults) were required to perform verbatim comprehension and value interpretation tasks with hypothetical self-monitoring test results. The test results were randomly presented by four reference range number lines: basic, color enhanced, color/text enhanced, and personalized information enhanced formats. We measured participants' task performance and eye movement data during task completion, and their perceptions and preference of the graphical formats. The four graphical formats yielded comparable task performance, while the color/text and personalized information enhanced formats were believed to be easier and more useful in information comprehension, and led to increased confidence in correct comprehension of test results, compared with the other formats (all p's < .05). Perceived health risk increased as the formats applied more information cues (p = .008). There were age differences in task performance and visual attention (all p's < .01), while young and older adults had similar perceptions of the four formats. The personalized information enhanced format was preferred by both groups. Color/text and personalized information cues appear to be useful for comprehending test results. Future work can be directed to improving the design of graphical formats, especially for older adults, and to assessing the formats in clinical settings.

  13. Manipulation of Pre-Target Activity on the Right Frontal Eye Field Enhances Conscious Visual Perception in Humans

    PubMed Central

    Chanes, Lorena; Chica, Ana B.; Quentin, Romain; Valero-Cabré, Antoni

    2012-01-01

    The right Frontal Eye Field (FEF) is a region of the human brain, which has been consistently involved in visuo-spatial attention and access to consciousness. Nonetheless, the extent of this cortical site’s ability to influence specific aspects of visual performance remains debated. We hereby manipulated pre-target activity on the right FEF and explored its influence on the detection and categorization of low-contrast near-threshold visual stimuli. Our data show that pre-target frontal neurostimulation has the potential when used alone to induce enhancements of conscious visual detection. More interestingly, when FEF stimulation was combined with visuo-spatial cues, improvements remained present only for trials in which the cue correctly predicted the location of the subsequent target. Our data provide evidence for the causal role of the right FEF pre-target activity in the modulation of human conscious vision and reveal the dependence of such neurostimulatory effects on the state of activity set up by cue validity in the dorsal attentional orienting network. PMID:22615759

  14. Visual perceptual abilities of Chinese-speaking and English-speaking children.

    PubMed

    Lai, Mun Yee; Leung, Frederick Koon Shing

    2012-04-01

    This paper reports an investigation of Chinese-speaking and English-speaking children's general visual perceptual abilities. The Developmental Test of Visual Perception was administered to 41 native Chinese-speaking children (mean age 5 yr. 4 mo.) in Hong Kong and 35 English-speaking children (mean age 5 yr. 2 mo.) in Melbourne. Of interest were two interrelated components of visual perceptual abilities, namely motor-reduced visual perceptual abilities and visual-motor integration perceptual abilities, which require either verbal or motoric responses in completing visual tasks. The Chinese-speaking children significantly outperformed the English-speaking children on general visual perceptual abilities. Comparing the two components, the Chinese-speaking students' performance on visual-motor integration was far better than that of their counterparts (ES = 2.70), while the two groups performed similarly on motor-reduced visual perceptual abilities. Cultural factors such as written language format may contribute to the enhanced visual-motor integration abilities of Chinese-speaking children, although questions remain about the validity of the Chinese version of the test.

  15. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.

    PubMed

    Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk

    2007-10-01

    In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted in the variety of emotions expressed. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about the multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information produced a significant increase in correctly classified stimuli compared with visual or auditory stimulation alone. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus when contrasting the audiovisual condition with the auditory and visual conditions. Further, substantiating their role in the emotional integration process, these brain regions showed a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural processes accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

  16. Making memories: the development of long-term visual knowledge in children with visual agnosia.

    PubMed

    Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.

  17. Making Memories: The Development of Long-Term Visual Knowledge in Children with Visual Agnosia

    PubMed Central

    Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment. PMID:24319599

  18. Analyzing the Reading Skills and Visual Perception Levels of First Grade Students

    ERIC Educational Resources Information Center

    Çayir, Aybala

    2017-01-01

    The purpose of this study was to analyze primary school first grade students' reading levels and correlate their visual perception skills. For this purpose, students' reading speed, reading comprehension and reading errors were determined using The Informal Reading Inventory. Students' visual perception levels were also analyzed using…

  19. Dissociable Roles of Different Types of Working Memory Load in Visual Detection

    PubMed Central

    Konstantinou, Nikos; Lavie, Nilli

    2013-01-01

    We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also being asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. fixed in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the colors and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection. PMID:23713796
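Detection sensitivity in studies like this one is conventionally summarized with the signal-detection measure d′: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of that computation follows; the trial counts are invented for illustration and are not data from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (adding 0.5 to each cell) keeps the
    z-transform finite when a rate would otherwise be 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical observers: the first detects the masked stimulus more
# reliably than the second at comparable false-alarm behaviour.
print(d_prime(40, 10, 10, 40) > d_prime(25, 25, 20, 30) > 0)  # → True
```

A rise or fall in d′ across load conditions, rather than raw accuracy, is what licenses the paper's claim of enhanced versus reduced detection sensitivity, since d′ separates sensitivity from response bias.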

  20. Texture Segregation Causes Early Figure Enhancement and Later Ground Suppression in Areas V1 and V4 of Visual Cortex.

    PubMed

    Poort, Jasper; Self, Matthew W; van Vugt, Bram; Malkki, Hemi; Roelfsema, Pieter R

    2016-10-01

    Segregation of images into figures and background is fundamental for visual perception. Cortical neurons respond more strongly to figural image elements than to background elements, but the mechanisms of figure-ground modulation (FGM) are only partially understood. It is unclear whether FGM in early and mid-level visual cortex is caused by an enhanced response to the figure, a suppressed response to the background, or both. We studied neuronal activity in areas V1 and V4 in monkeys performing a texture segregation task. We compared texture-defined figures with homogeneous textures and found an early enhancement of the figure representation, and a later suppression of the background. Across neurons, the strength of figure enhancement was independent of the strength of background suppression. We also examined activity in the different V1 layers. Both figure enhancement and ground suppression were strongest in superficial and deep layers and weaker in layer 4. The current-source density profiles suggested that figure enhancement was caused by stronger synaptic inputs in feedback-recipient layers 1, 2, and 5 and ground suppression by weaker inputs in these layers, suggesting an important role for feedback connections from higher level areas. These results provide new insights into the mechanisms for figure-ground organization. © The Author 2016. Published by Oxford University Press.

  1. Perceptual geometry of space and form: visual perception of natural scenes and their virtual representation

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.

    2001-11-01

    Perceptual geometry is an emerging field of interdisciplinary research whose objectives focus on the study of geometry from the perspective of visual perception and, in turn, on applying such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions about the perception of form and the representation of space through a synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form and space are among the fundamental problems in vision science. In recent cognitive and computational models of human perception, natural scenes are used systematically as preferred visual stimuli. Among the key problems in perception of form and space, we have examined the perceived geometry of natural surfaces and curves, such as those in the observer's environment. Besides a systematic mathematical foundation for a remarkably general framework, the advantages of the Gestalt theory of natural surfaces include a concrete computational approach for simulating or recreating images whose geometric invariants and quantities might be perceived and estimated by an observer. The latter lies at the very foundation of understanding the nature of perception of space and form, and of the (computer graphics) problem of rendering scenes that visually invoke virtual presence.

  2. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing predetermined visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image that removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room; with the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, we measured the effect of each visual stimulus on the self-motion stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.
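The shift measure used in this kind of experiment is the point of subjective equality (PSE): the inertial velocity at which the two response directions are equally likely. A minimal sketch of extracting it from a psychometric curve; the velocities and response proportions below are illustrative, not data from the study.

```python
import numpy as np

# Hypothetical psychometric data: inertial sway velocities (cm/s) and the
# proportion of "rightward" self-motion reports at each velocity, as might
# be collected in one visual-stimulus block.
velocities = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
p_right = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])

def pse(x, p):
    """Point of subjective equality: the stimulus level at which the two
    responses are equally likely, by linear interpolation at p = 0.5."""
    return float(np.interp(0.5, p, x))

# A visual stimulus that biases reports rightward shifts the PSE to a
# negative velocity: less inertial motion is needed to feel rightward.
print(round(pse(velocities, p_right), 3))  # → -0.1
```

In practice a sigmoid fit is usually preferred to interpolation, but the PSE concept, and the comparison of PSEs between visual-stimulus and control blocks, is the same.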

  3. The Effect of Visual Experience on Perceived Haptic Verticality When Tilted in the Roll Plane

    PubMed Central

    Cuturi, Luigi F.; Gori, Monica

    2017-01-01

    The orientation of the body in space can influence the perception of verticality, sometimes leading to biases consistent with priors peaked at the most common head and body orientation, that is, upright. In this study, we investigated haptic perception of verticality in sighted individuals and in early and late blind adults when tilted counterclockwise in the roll plane. Participants performed a stimulus orientation discrimination task with their body tilted 90° relative to gravity toward their left ear. Stimuli were presented using a motorized haptic bar. To test whether different reference frames relative to the head influenced perception of verticality, we varied the position of the stimulus along the body's longitudinal axis. Depending on the stimulus position, sighted participants tended to show biases away from or toward their body tilt. Visually impaired individuals instead showed a different pattern of verticality estimates: a bias toward head and body tilt (i.e., the Aubert effect) was observed in late blind individuals, whereas, interestingly, no strong biases were observed in early blind individuals. Overall, these results suggest that visual sensory information is fundamental in shaping the haptic readout of proprioceptive and vestibular information about body orientation relative to gravity. The acquisition of an idiotropic vector signaling the upright might take place through vision during development. In early blind individuals, independent spatial navigation experience, likely enhanced by echolocation behavior, might play a role in such acquisition. Participants with late-onset blindness, whose early experience included vision, might anchor their visually acquired priors to the haptic modality without disambiguating head and body references, as observed in sighted individuals (Fraser et al., 2015). Our study thus investigates haptic perception of the direction of gravity at unusual body tilts when vision is absent owing to visual impairment, and our findings shed light on the influence of proprioceptive/vestibular information on haptically perceived verticality in blind individuals, showing how this phenomenon is shaped by visual experience. PMID:29270109

  4. Comparison of visual field training for hemianopia with active versus sham transcranial direct cortical stimulation.

    PubMed

    Plow, Ela B; Obretenova, Souzana N; Fregni, Felipe; Pascual-Leone, Alvaro; Merabet, Lotfi B

    2012-01-01

    Vision Restoration Therapy (VRT) aims to improve visual field function by systematically training regions of residual vision associated with the activity of suboptimally firing neurons within the occipital cortex. Transcranial direct current stimulation (tDCS) has been shown to modulate cortical excitability. Our objective was to assess the possible efficacy of tDCS combined with VRT. The authors conducted a randomized, double-blind, demonstration-of-concept pilot study in which participants were assigned to either VRT and tDCS or VRT and sham. The anode was placed over the occipital pole to target both the affected and unaffected lobes. One-hour training sessions were carried out 3 times per week for 3 months in a laboratory. Outcome measures included objective and subjective changes in the visual field, recording of visual fixation performance, and vision-related activities of daily living (ADLs) and quality of life (QOL). Although 12 participants were enrolled, only 8 could be analyzed. The VRT and tDCS group demonstrated significantly greater expansion in the visual field and improvement on ADLs compared with the VRT and sham group. Contrary to expectations, subjective perception of visual field change was greater in the VRT and sham group. QOL did not change for either group. The observed changes in visual field were unrelated to compensatory eye movements, as shown with fixation monitoring. The combination of occipital cortical tDCS with visual field rehabilitation appears to enhance visual functional outcomes compared with visual rehabilitation alone. tDCS may enhance inherent mechanisms of plasticity associated with training.

  5. The reliability and clinical correlates of figure-ground perception in schizophrenia.

    PubMed

    Malaspina, Dolores; Simon, Naomi; Goetz, Raymond R; Corcoran, Cheryl; Coleman, Eliza; Printz, David; Mujica-Parodi, Lilianne; Wolitzky, Rachel

    2004-01-01

    Schizophrenia subjects are impaired in a number of visual attention paradigms. However, their performance on tests of figure-ground visual perception (FGP), which require subjects to visually discriminate figures embedded in a rival background, is relatively unstudied. We examined FGP in 63 schizophrenia patients and 27 control subjects and found that the patients performed the FGP test reliably and had significantly lower FGP scores than the control subjects. Figure-ground visual perception was significantly correlated with other neuropsychological test scores and was inversely related to negative symptoms. It was unrelated to antipsychotic medication treatment. Figure-ground visual perception depends on "top down" processing of visual stimuli, and these data thus suggest that dysfunction in the higher-level pathways that modulate visual perceptual processes may also be related to a core defect in schizophrenia.

  6. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    PubMed

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35% identification of /apa/ compared to ~5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
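The classification analysis described above is a form of reverse correlation: random masks are grouped by the response they accompanied and contrasted, so frames whose visibility changed the percept stand out in the resulting map. A toy sketch under invented dimensions; the trial counts, mask sizes, and the simulated observer are assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trial shows a video whose mouth region is covered
# by a random transparency mask (frames x pixels); the observer reports
# whether /apa/ was heard.
n_trials, n_frames, n_pixels = 2000, 30, 64
masks = rng.random((n_trials, n_frames, n_pixels))  # 1 = fully visible

# Simulate an observer whose /apa/ reports drop when frames 8-12 of the
# mouth are visible (visual /aka/ information captures the percept).
critical = masks[:, 8:13, :].mean(axis=(1, 2))
p_apa = np.clip(0.35 - 5.0 * (critical - critical.mean()), 0, 1)
responses = rng.random(n_trials) < p_apa

# Classification image: mean mask on /apa/ trials minus mean mask on
# non-/apa/ trials; negative values mark frames where visibility of the
# visual syllable suppressed /apa/ reports.
cls_img = masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)
print(cls_img[8:13].mean() < cls_img[:8].mean())  # critical frames dip below baseline
```

The real study's maps are spatiotemporal and statistically thresholded, but the core contrast, response-sorted averaging of the random masks, is the one sketched here.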

  7. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    PubMed Central

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309

  8. Reading Disability and Visual Perception in Families: New Findings.

    ERIC Educational Resources Information Center

    Oxford, Rebecca L.

    Frequently a variety of visual perception difficulties correlate with reading disabilities. A study was made to investigate the relationship between visual perception and reading disability in families, and to explore the genetic aspects of the relationship. One-hundred twenty-five reading-disabled students, ages 7.5 to 12 years, were matched with…

  9. Emotional modulation of visual remapping of touch.

    PubMed

    Cardini, Flavia; Bertini, Caterina; Serino, Andrea; Ladavas, Elisabetta

    2012-10-01

    The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect is termed "visual remapping of touch" or the VRT effect. Given the high social value of this mechanism, we investigated whether it might be modulated by key information processed in face-to-face interactions: facial emotional expression. In two separate experiments, participants received tactile stimuli, near the perceptual threshold, on their right, left, or both cheeks. Concurrently, they watched several blocks of movies depicting a face with a neutral, happy, or fearful expression that was touched or just approached by human fingers (Experiment 1). Participants were asked to distinguish between unilateral and bilateral tactile stimulation. Tactile perception was enhanced when viewing touch toward a fearful face compared with viewing touch toward the other two expressions. To test whether this result generalizes to other negative emotions or is fear-specific, we ran a second experiment in which participants watched movies of faces, touched or approached by fingers, with either a fearful or an angry expression (Experiment 2). In line with the first experiment, tactile perception was enhanced when subjects viewed touch toward a fearful face but not toward an angry face. The results of both experiments are interpreted in light of the different mechanisms underlying the recognition of different emotions, with a specific involvement of the somatosensory system when viewing a fearful expression and a resulting fear-specific modulation of the VRT effect.

  10. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment

    PubMed Central

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J.

    2014-01-01

    Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648

  11. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    PubMed

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  12. PERCEPT Indoor Navigation System for the Blind and Visually Impaired: Architecture and Experimentation

    PubMed Central

    Ganz, Aura; Schafer, James; Gandhi, Siddhesh; Puleo, Elaine; Wilson, Carole; Robertson, Meg

    2012-01-01

    We introduce the PERCEPT system, an indoor navigation system for the blind and visually impaired. PERCEPT will improve the quality of life and health of the visually impaired community by enabling independent living. Using PERCEPT, blind users will have independent access to public health facilities such as clinics, hospitals, and wellness centers. Access to healthcare facilities is crucial for this population because of the multiple health conditions they face, such as diabetes and its complications. Trials of the PERCEPT system with 24 blind and visually impaired users in a multistory building show its effectiveness in providing appropriate navigation instructions to these users. The uniqueness of our system is that it is affordable and that its design follows orientation and mobility principles. We hope that PERCEPT will become a standard deployed in all indoor public spaces, especially in healthcare and wellness facilities. PMID:23316225

  13. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    PubMed Central

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  14. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding the relation between visual imagery and visual perception remains an open issue. Many studies have tried to determine whether the two processes share the same mechanisms or are independent, using different neural substrates. Most research has been directed toward whether activation of primary visual areas is necessary during imagery. Here we review…

  15. Perception and Attention for Visualization

    ERIC Educational Resources Information Center

    Haroz, Steve

    2013-01-01

    This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…

  16. Do Visual Illusions Probe the Visual Brain?: Illusions in Action without a Dorsal Visual Stream

    ERIC Educational Resources Information Center

    Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves

    2007-01-01

    Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…

  17. Visual perception from the perspective of a representational, non-reductionistic, level-dependent account of perception and conscious awareness

    PubMed Central

    Overgaard, Morten; Mogensen, Jesper

    2014-01-01

    This article proposes a new model to interpret seemingly conflicting evidence concerning the correlation of consciousness and neural processes. Based on an analysis of research on blindsight and subliminal perception, the reorganization of elementary functions and consciousness framework suggests that mental representations consist of functions at several different levels of analysis, including truly localized perceptual elementary functions and perceptual algorithmic modules, which are interconnections of the elementary functions. We suggest that conscious content relates to the ‘top level’ of analysis in a ‘situational algorithmic strategy’ that reflects the general state of an individual. We argue that conscious experience is intrinsically related to representations that are available to guide behaviour. From this perspective, we find that blindsight and subliminal perception can be explained partly by an overly coarse-grained methodology, and partly by top-down enhancement of representations that normally would not be relevant to action. PMID:24639581

  18. Influence of age, sex, and education on the Visual Object and Space Perception Battery (VOSP) in a healthy normal elderly population.

    PubMed

    Herrera-Guzmán, I; Peña-Casanova, J; Lara, J P; Gudayol-Ferré, E; Böhm, P

    2004-08-01

    The assessment of visual perception and cognition forms an important part of any general cognitive evaluation. We studied the possible influence of age, sex, and education on the performance of a normal elderly Spanish population (90 healthy subjects) in visual perception tasks. To evaluate visual perception and cognition, we used subjects' performance on the Visual Object and Space Perception Battery (VOSP). The test consists of 8 subtests: 4 measure visual object perception (Incomplete Letters, Silhouettes, Object Decision, and Progressive Silhouettes) while the other 4 measure visual space perception (Dot Counting, Position Discrimination, Number Location, and Cube Analysis). The statistical procedures employed were either simple or multiple linear regression analyses (subtests with normal distribution) and Mann-Whitney tests, followed by ANOVA with Scheffé correction (subtests without normal distribution). Age and sex were found to be significant modifying factors in the Silhouettes, Object Decision, Progressive Silhouettes, Position Discrimination, and Cube Analysis subtests. Educational level was found to be a significant predictor of function for the Silhouettes and Object Decision subtests. The results of the sample were adjusted in line with the differences observed. Our study also offers preliminary normative data for the administration of the VOSP to an elderly Spanish population. The results are discussed and compared with similar studies performed in different cultural backgrounds.

  19. Visual perception of ADHD children with sensory processing disorder.

    PubMed

    Jung, Hyerim; Woo, Young Jae; Kang, Je Wook; Choi, Yeon Woo; Kim, Kyeong Mi

    2014-04-01

    The aim of the present study was to investigate the difference in visual perception between ADHD children with and without sensory processing disorder, and the relationship between sensory processing and visual perception in children with ADHD. Participants were 47 outpatients, aged 6-8 years, diagnosed with ADHD. After excluding those who met exclusion criteria, 38 subjects were clustered into two groups, ADHD children with and without sensory processing disorder (SPD), using the SSP reported by their parents; the subjects then completed the K-DTVP-2. Spearman correlation analysis was run to determine the relationship between sensory processing and visual perception, and Mann-Whitney U tests were conducted to compare the K-DTVP-2 scores of the two groups. The ADHD children with SPD performed inferiorly to ADHD children without SPD on the 3 quotients of the K-DTVP-2. The GVP score of the K-DTVP-2 was related to the Movement Sensitivity (r = 0.368*) and Low Energy/Weak (r = 0.369*) sections of the SSP. The results of the present study suggest that among children with ADHD, visual perception is lower in those children with co-morbid SPD. Also, visual perception may be related to sensory processing, especially in the reactions of vestibular and proprioceptive senses. Regarding academic performance, it is necessary to consider how sensory processing issues affect visual perception in children with ADHD.
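
    The two analyses named in this record, a Spearman rank correlation and a Mann-Whitney U comparison, can be sketched in a few lines. This is a minimal, dependency-free illustration only; all scores below are invented and do not come from the study.

    ```python
    def ranks(values):
        """Rank values from 1..n (assumes no ties)."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    def spearman_rho(x, y):
        """Spearman rho via the rank-difference formula (no ties)."""
        n = len(x)
        rx, ry = ranks(x), ranks(y)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    def mann_whitney_u(sample_a, sample_b):
        """U statistic for sample_a: count of pairs where a > b (+0.5 for ties)."""
        u = 0.0
        for a in sample_a:
            for b in sample_b:
                u += 1.0 if a > b else (0.5 if a == b else 0.0)
        return u

    # Invented SSP section scores vs. K-DTVP-2 GVP scores for 10 children.
    ssp = [12, 15, 9, 20, 14, 18, 11, 16, 13, 19]
    gvp = [85, 92, 80, 105, 90, 99, 83, 95, 88, 102]
    print(spearman_rho(ssp, gvp))  # perfectly monotone toy data -> 1.0

    # Invented GVP scores for ADHD children with vs. without SPD.
    print(mann_whitney_u([80, 83, 85, 88, 90], [92, 95, 99, 102, 105]))  # -> 0.0
    ```

    In practice one would use a statistics library and obtain p-values as well; the point here is only the shape of the two tests the abstract reports.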

  20. Differential temporal dynamics during visual imagery and perception.

    PubMed

    Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj

    2018-05-29

    Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.

  1. How visual cues for when to listen aid selective auditory attention.

    PubMed

    Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G

    2012-06-01

    Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.

  2. Does Seeing Ice Really Feel Cold? Visual-Thermal Interaction under an Illusory Body-Ownership

    PubMed Central

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate the interaction between visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI) wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that the ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance of ownership of a body part that is touched by the visual object on the visual-thermal interaction is discussed. PMID:23144814

  3. Does seeing ice really feel cold? Visual-thermal interaction under an illusory body-ownership.

    PubMed

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate the interaction between visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI) wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that the ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance of ownership of a body part that is touched by the visual object on the visual-thermal interaction is discussed.

  4. Visual Enhancement of Illusory Phenomenal Accents in Non-Isochronous Auditory Rhythms

    PubMed Central

    2016-01-01

    Musical rhythms encompass temporal patterns that often yield regular metrical accents (e.g., a beat). There have been mixed results regarding perception as a function of metrical saliency, namely, whether sensitivity to a deviant was greater in metrically stronger or weaker positions. Moreover, effects of metrical position have not been examined in non-isochronous rhythms, or with respect to multisensory influences. This study was concerned with two main issues: (1) In non-isochronous auditory rhythms with clear metrical accents, how would sensitivity to a deviant be modulated by metrical positions? (2) Would the effects be enhanced by multisensory information? Participants listened to strongly metrical rhythms with or without watching a point-light figure dance to the rhythm in the same meter, and detected a slight loudness increment. Both conditions were presented with or without an auditory interference that served to impair auditory metrical perception. Sensitivity to a deviant was found to be greater in weak-beat than in strong-beat positions, consistent with the Predictive Coding hypothesis and the idea of metrically induced illusory phenomenal accents. The visual rhythm of dance hindered auditory detection, but more so when the latter was itself less impaired. This pattern suggested that the visual and auditory rhythms were perceptually integrated to reinforce metrical accentuation, yielding more illusory phenomenal accents and thus lower sensitivity to deviants, in a manner consistent with the principle of inverse effectiveness. Results were discussed in the predictive framework for multisensory rhythms involving observed movements and possible mediation of the motor system. PMID:27880850

  5. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  6. Auditory, visual, and auditory-visual perception of emotions by individuals with cochlear implants, hearing aids, and normal hearing.

    PubMed

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.

  7. [Peculiarities of visual perception of dentition and smile aesthetic parameters].

    PubMed

    Riakhovskiĭ, A N; Usanova, E V

    2007-01-01

    The studies determined the limits within which displacement of the dentition's central line from the facial midline, and changes in the smile-line tilt angle, become noticeable to visual perception, and also how visual perception of these dentition aesthetic parameters differs among doctors of varying experience, dental technicians, and patients.

  8. Seen, Unseen or Overlooked? How Can Visual Perception Develop through a Multimodal Enquiry?

    ERIC Educational Resources Information Center

    Payne, Rachel

    2012-01-01

    This article outlines an exploration into the development of visual perception through analysing the process of taking photographs of the mundane as small-scale research. A preoccupation with social construction of the visual lies at the heart of the investigation by correlating the perceptive process to Mitchell's (2002) counter thesis for visual…

  9. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  11. A model of color vision with a robot system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

    In this paper, we propose to generalize the saccade target method and state that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent visual stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that has to be compensated for in order to perceive a stable world. In this paper, we have provided a simple implementation of this sensory-motor contingency view of perceptual stability. We showed how a straightforward application of the temporal difference learning technique yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
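
    The core idea, learning to predict how sensor input changes with each motor act, can be sketched with a temporal-difference-style update. This is a schematic toy model, not the authors' implementation: the "world", its 4-state sensor, and the learning rate are all invented for illustration.

    ```python
    import random

    random.seed(0)
    ALPHA = 0.5  # learning rate (arbitrary choice for the toy model)

    def world(sensor, motor):
        """Toy sensorimotor contingency: post-saccade reading from state + command."""
        return (sensor + motor) % 4

    # (sensor, motor) -> predicted post-saccade sensor value
    prediction = {}

    for _ in range(2000):
        s = random.randrange(4)       # current sensor reading
        m = random.randrange(4)       # motor command (saccade)
        observed = world(s, m)        # actual post-saccade reading
        old = prediction.get((s, m), 0.0)
        # temporal-difference-style update: move prediction toward the outcome
        prediction[(s, m)] = old + ALPHA * (observed - old)

    # Once learned, predicted and actual post-saccade input agree, which is
    # the basis the abstract claims for a stable percept across saccades.
    print(round(prediction[(1, 2)], 2))
    ```

    The learned table lets the agent anticipate the sensory consequence of its own saccade, so the resulting change carries no surprise and the percept remains stable.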

  12. Saccadic Corollary Discharge Underlies Stable Visual Perception

    PubMed Central

    Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.

    2016-01-01

    Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the old world monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. 
A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647

  13. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

    An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time-histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known ways. Experiments with either of the visual displays or cockpit motion and some combinations of these were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  14. Auditory-visual fusion in speech perception in children with cochlear implants

    PubMed Central

    Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.

    2005-01-01

    Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316

  15. Predictions penetrate perception: Converging insights from brain, behaviour and disorder

    PubMed Central

    O’Callaghan, Claire; Kveraga, Kestutis; Shine, James M; Adams, Reginald B.; Bar, Moshe

    2018-01-01

    It is argued that during ongoing visual perception, the brain is generating top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence that indicate a wide spectrum of domains—including object recognition, contextual associations, cognitive biases and affective state—that can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception. PMID:27222169

  16. Eagle-eyed visual acuity: an experimental investigation of enhanced perception in autism.

    PubMed

    Ashwin, Emma; Ashwin, Chris; Rhydderch, Danielle; Howells, Jessica; Baron-Cohen, Simon

    2009-01-01

    Anecdotal accounts of sensory hypersensitivity in individuals with autism spectrum conditions (ASC) have been noted since the first reports of the condition. Over time, empirical evidence has supported the notion that those with ASC have superior visual abilities compared with control subjects. However, it remains unclear whether these abilities are specifically the result of differences in sensory thresholds (low-level processing), rather than higher-level cognitive processes. This study investigates visual threshold in n = 15 individuals with ASC and n = 15 individuals without ASC, using a standardized optometric test, the Freiburg Visual Acuity and Contrast Test, to investigate basic low-level visual acuity. Individuals with ASC have significantly better visual acuity (20:7) compared with control subjects (20:13), acuity so superior that it lies in the region reported for birds of prey. The results of this study suggest that inclusion of sensory hypersensitivity in the diagnostic criteria for ASC may be warranted and that basic standardized tests of sensory thresholds may inform causal theories of ASC.
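
    The two Snellen ratios quoted above (20:7 and 20:13) can be converted to decimal acuity and logMAR to make the size of the reported difference concrete. A quick arithmetic check, using only the two ratios in the abstract:

    ```python
    import math

    def snellen_to_decimal(distance, denominator):
        """Decimal acuity from a Snellen fraction (normal vision = 1.0)."""
        return distance / denominator

    def decimal_to_logmar(decimal_acuity):
        """logMAR = -log10(decimal acuity); negative means better than normal."""
        return -math.log10(decimal_acuity)

    asc = snellen_to_decimal(20, 7)       # ASC group
    control = snellen_to_decimal(20, 13)  # control group

    print(round(asc, 2), round(control, 2))          # 2.86 1.54
    print(round(decimal_to_logmar(asc), 2),
          round(decimal_to_logmar(control), 2))      # -0.46 -0.19
    ```

    Both groups are better than the 1.0 (logMAR 0.0) norm, but the ASC figure is nearly twice the control value, which is what licenses the "birds of prey" comparison in the abstract.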

  17. Rise and fall of the two visual systems theory.

    PubMed

    Rossetti, Yves; Pisella, Laure; McIntosh, Robert D

    2017-06-01

    Among the many dissociations describing the visual system, the dual theory of two visual systems, respectively dedicated to perception and action, has yielded a lot of support. There are psychophysical, anatomical and neuropsychological arguments in favor of this theory. Several behavioral studies that used sensory and motor psychophysical parameters observed differences between perceptive and motor responses. The anatomical network of the visual system in the non-human primate was very readily organized according to two major pathways, dorsal and ventral. Neuropsychological studies, exploring optic ataxia and visual agnosia as characteristic deficits of these two pathways, led to the proposal of a functional double dissociation between visuomotor and visual perceptual functions. After a major wave of popularity that promoted great advances, particularly in knowledge of visuomotor functions, the guiding theory is now being reconsidered. Firstly, the idea of a double dissociation between optic ataxia and visual form agnosia, as cleanly separating visuomotor from visual perceptual functions, is no longer tenable; optic ataxia does not support a dissociation between perception and action and might be more accurately viewed as a negative image of action blindsight. Secondly, dissociations between perceptive and motor responses highlighted in the framework of this theory concern a very elementary level of action, even automatically guided action routines. Thirdly, the very rich interconnected network of the visual brain yields few arguments in favor of a strict perception/action dissociation. Overall, the dissociation between motor function and perceptive function explored by these behavioral and neuropsychological studies can help define an automatic level of action organization deficient in optic ataxia and preserved in action blindsight, and underlines the renewed need to consider the perception-action circle as a functional ensemble. 
Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  18. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
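
    The Dmin range above is expressed in log minutes of arc; converting back to linear units shows the spread of displacement thresholds across participants. A small sketch using only the two endpoints quoted in the abstract:

    ```python
    def log_minarc_to_arcsec(log_minarc):
        """Convert a log10 threshold in minutes of arc to seconds of arc."""
        return (10 ** log_minarc) * 60.0

    # Endpoints of the Dmin range reported for the random dot kinematogram.
    best, worst = -0.88, -0.12

    print(round(log_minarc_to_arcsec(best), 1))   # most sensitive observer, ~7.9 arcsec
    print(round(log_minarc_to_arcsec(worst), 1))  # least sensitive observer, ~45.5 arcsec
    ```

    So the least sensitive participant needed roughly a six-fold larger displacement than the most sensitive one, a spread large enough to carry the predictive relationship with hazard perception times.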

  19. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    ERIC Educational Resources Information Center

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  20. Relationships between Fine-Motor, Visual-Motor, and Visual Perception Scores and Handwriting Legibility and Speed

    ERIC Educational Resources Information Center

    Klein, Sheryl; Guiltner, Val; Sollereder, Patti; Cui, Ying

    2011-01-01

    Occupational therapists assess fine motor, visual motor, visual perception, and visual skill development, but knowledge of the relationships between scores on sensorimotor performance measures and handwriting legibility and speed is limited. Ninety-nine students in grades three to six with learning and/or behavior problems completed the Upper-Limb…

  1. Touch to see: neuropsychological evidence of a sensory mirror system for touch.

    PubMed

    Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo

    2012-09-01

The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their underlying damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task comprising visual stimuli that either depicted touch or lacked any tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touch, regardless of the viewing perspective, affects visual perception differently depending on which sensory modality is damaged: In patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, while intact somatosensory processing can aid visual perception.

  2. Visual perception and frontal lobe in intellectual disabilities: a study with evoked potentials and neuropsychology.

    PubMed

    Muñoz-Ruata, J; Caro-Martínez, E; Martínez Pérez, L; Borja, M

    2010-12-01

Perception disorders are frequently observed in persons with intellectual disability (ID), and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the early component of the visual event-related potentials, the N1 wave, which is related to perception alterations in several pathologies. Additionally, the relationship between N1 and neuropsychological visual tests was studied with the aim of understanding its functional significance in persons with ID. A group of 69 subjects with etiologically heterogeneous mild ID performed an odd-ball task of active discrimination of geometric figures. N1a (frontal) and N1b (post-occipital) waves were obtained from the evoked potentials. The subjects also performed several neuropsychological tests. Only the N1a component, produced by the target stimulus, showed significant correlations with the visual integration, visual semantic association and visual analogical reasoning tests, the Perceptual Reasoning Index (Wechsler Intelligence Scale for Children Fourth Edition) and intelligence quotient. The systematic correlations of performance on perceptual-ability tasks with the N1a (frontal), and not with the N1b (posterior), suggest that the visual perception process involves frontal participation. These correlations support the idea that the N1a and N1b are not equivalent. The relationship between frontal functions and early stages of visual perception is reviewed and discussed, as well as the frontal contribution to the neuropsychological tests used. A possible relationship between frontal activity dysfunction in ID and perceptive problems is suggested. The perceptive alterations observed in persons with ID could indeed be due to altered sensory areas, but also to a failure of frontal participation in perceptive processes, conceived as elaborations within reverberant circuits of perception-action. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  3. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance system more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time with traffic/road environment information, driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements did have important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
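The prediction step of a multinomial log-linear model like the one above can be sketched as multinomial logistic regression: each perception-response-time category receives a linear score over environment/vision features, normalized by softmax. The feature and weight values below are purely illustrative assumptions, not values from the study:

```python
import math

def softmax_predict(feature_vec, weight_rows):
    """Score each outcome category with a linear model over the features,
    then normalize with a numerically stable softmax (multinomial logit)."""
    scores = [sum(w * x for w, x in zip(row, feature_vec)) for row in weight_rows]
    m = max(scores)                      # subtract max to avoid overflow
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

Called with one weight row per response-time category, it returns a probability distribution over categories for a given driving situation.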

  4. Visual Arts Teaching in Kindergarten through 3rd-Grade Classrooms in the UAE: Teacher Profiles, Perceptions, and Practices

    ERIC Educational Resources Information Center

    Buldu, Mehmet; Shaban, Mohamed S.

    2010-01-01

    This study portrayed a picture of kindergarten through 3rd-grade teachers who teach visual arts, their perceptions of the value of visual arts, their visual arts teaching practices, visual arts experiences provided to young learners in school, and major factors and/or influences that affect their teaching of visual arts. The sample for this study…

  5. Doctor-patient communication in rheumatology: studies of visual and verbal perception using educational booklets and other graphic material.

    PubMed Central

    Moll, J M

    1986-01-01

Patients (n = 404) with osteoarthrosis and control subjects (n = 233) were studied to examine the communicational value of five styles of illustration (cartoon (C), matchstick (M), representational (R), symbolic (S), photographic (P)) and two levels of text ('easy', 'hard'), presented as educational booklets about osteoarthrosis. Booklet comprehension was tested with a multiple choice questionnaire (MCQ) scored by two raw score and two, more sensitive, weight-of-evidence methods. Further studies assessed perception of image detail, tone, and colour by ranking, rating, latency, and questionnaire methods. A subgroup was tested psychometrically. The main findings were: pictures in booklets enhance communication; perception of pictorial style depends on its vehicle of presentation, cartoons being most effective in booklets, photographs overall; simplifying text does not significantly enhance communication; certain picture-text 'interactions' appear to increase comprehension (e.g. 'hard' text with 'easy' pictures); several 'endogenous' factors are associated with increased comprehension: 'psychological' (e.g., intelligence, memory, reading skill); 'demographic' (e.g., the young, males, higher social grades, higher educational levels); 'disease' (e.g., longer disease duration, previous information about the disease). PMID:3954469

  6. Doctor-patient communication in rheumatology: studies of visual and verbal perception using educational booklets and other graphic material.

    PubMed

    Moll, J M

    1986-03-01

Patients (n = 404) with osteoarthrosis and control subjects (n = 233) were studied to examine the communicational value of five styles of illustration (cartoon (C), matchstick (M), representational (R), symbolic (S), photographic (P)) and two levels of text ('easy', 'hard'), presented as educational booklets about osteoarthrosis. Booklet comprehension was tested with a multiple choice questionnaire (MCQ) scored by two raw score and two, more sensitive, weight-of-evidence methods. Further studies assessed perception of image detail, tone, and colour by ranking, rating, latency, and questionnaire methods. A subgroup was tested psychometrically. The main findings were: pictures in booklets enhance communication; perception of pictorial style depends on its vehicle of presentation, cartoons being most effective in booklets, photographs overall; simplifying text does not significantly enhance communication; certain picture-text 'interactions' appear to increase comprehension (e.g. 'hard' text with 'easy' pictures); several 'endogenous' factors are associated with increased comprehension: 'psychological' (e.g., intelligence, memory, reading skill); 'demographic' (e.g., the young, males, higher social grades, higher educational levels); 'disease' (e.g., longer disease duration, previous information about the disease).

  7. Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2011-10-01

We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
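In signal detection analyses of this kind, sensitivity (d') and response criterion (c) are standardly estimated from hit and false-alarm rates. A minimal sketch follows; the log-linear correction for extreme rates is a common convention and an assumption here, not necessarily the authors' exact procedure:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """Estimate sensitivity d' and criterion c for a yes/no detection task
    from trial counts: hits, misses, false alarms, correct rejections."""
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

With symmetric performance (e.g., 40 hits / 10 misses vs. 10 false alarms / 40 correct rejections) this yields a positive d' and a criterion near zero.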

  8. Enhanced ERPs to visual stimuli in unaffected male siblings of ASD children.

    PubMed

    Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H

    2016-01-01

    Autism spectrum disorders are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6-9 years of age completed a face-recognition task and a passive viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. Enhanced ERPs among unaffected male siblings is discussed in relation to potential differences in neural network recruitment during visual and face processing.

  9. Motor excitability during visual perception of known and unknown spoken languages.

    PubMed

    Swaminathan, Swathi; MacSweeney, Mairéad; Boyles, Rowan; Waters, Dafydd; Watkins, Kate E; Möttönen, Riikka

    2013-07-01

    It is possible to comprehend speech and discriminate languages by viewing a speaker's articulatory movements. Transcranial magnetic stimulation studies have shown that viewing speech enhances excitability in the articulatory motor cortex. Here, we investigated the specificity of this enhanced motor excitability in native and non-native speakers of English. Both groups were able to discriminate between speech movements related to a known (i.e., English) and unknown (i.e., Hebrew) language. The motor excitability was higher during observation of a known language than an unknown language or non-speech mouth movements, suggesting that motor resonance is enhanced specifically during observation of mouth movements that convey linguistic information. Surprisingly, however, the excitability was equally high during observation of a static face. Moreover, the motor excitability did not differ between native and non-native speakers. These findings suggest that the articulatory motor cortex processes several kinds of visual cues during speech communication. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  10. Does visual attention drive the dynamics of bistable perception?

    PubMed Central

    Dieter, Kevin C.; Brascamp, Jan; Tadin, Duje; Blake, Randolph

    2016-01-01

    How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct – depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences. PMID:27230785

  11. Does visual attention drive the dynamics of bistable perception?

    PubMed

    Dieter, Kevin C; Brascamp, Jan; Tadin, Duje; Blake, Randolph

    2016-10-01

How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct, depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences.

  12. Neural correlates of tactile perception during pre-, peri-, and post-movement.

    PubMed

    Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte

    2016-05-01

    Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.
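The ERP modulations described above rest on averaging many stimulus-locked epochs so that activity unrelated to the stimulus cancels out. A toy sketch of baseline-corrected trial averaging (illustrative only, not the authors' analysis pipeline):

```python
def average_erp(trials, n_baseline):
    """Average stimulus-locked epochs into an ERP, after subtracting each
    trial's mean pre-stimulus baseline (its first n_baseline samples)."""
    corrected = []
    for trial in trials:
        base = sum(trial[:n_baseline]) / n_baseline
        corrected.append([v - base for v in trial])
    n_samples = len(trials[0])
    return [sum(t[i] for t in corrected) / len(corrected)
            for i in range(n_samples)]
```

Peak amplitudes in analysis windows such as 80-200 ms post-stimulus would then be read off the averaged waveform.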

  13. Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments.

    PubMed

    Effenberg, Alfred O; Fehse, Ursula; Schmitz, Gerd; Krueger, Bjoern; Mechling, Heinz

    2016-01-01

Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has been increasingly recognized. A growing number of studies indicate an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. Yet the overwhelming part of both research lines is dedicated to basic research. Apart from the domains of music, dance and motor rehabilitation, there is almost no evidence for the enhanced effectiveness of multisensory information in learning gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with visual and proprioceptive perceptual feedback streams. With ongoing training, synchronously processed auditory information should be integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to approaches that use acoustic information as error feedback in motor learning settings, we try to generate additional movement information that accelerates and enhances adequate sensorimotor representations and can be processed below the level of consciousness. 
In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition in indoor rowing). One group trained with visual information and two groups with audiovisual information (sonification vs. natural sounds). Learning became evident and remained stable for all three groups. Participants who received additional movement sonification showed better performance than both other groups. Results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning.

  14. Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments

    PubMed Central

    Effenberg, Alfred O.; Fehse, Ursula; Schmitz, Gerd; Krueger, Bjoern; Mechling, Heinz

    2016-01-01

Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has been increasingly recognized. A growing number of studies indicate an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. Yet the overwhelming part of both research lines is dedicated to basic research. Apart from the domains of music, dance and motor rehabilitation, there is almost no evidence for the enhanced effectiveness of multisensory information in learning gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with visual and proprioceptive perceptual feedback streams. With ongoing training, synchronously processed auditory information should be integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to approaches that use acoustic information as error feedback in motor learning settings, we try to generate additional movement information that accelerates and enhances adequate sensorimotor representations and can be processed below the level of consciousness. 
In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition in indoor rowing). One group trained with visual information and two groups with audiovisual information (sonification vs. natural sounds). Learning became evident and remained stable for all three groups. Participants who received additional movement sonification showed better performance than both other groups. Results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning. PMID:27303255

  15. Applied estimation for hybrid dynamical systems using perceptional information

    NASA Astrophysics Data System (ADS)

    Plotnik, Aaron M.

    This dissertation uses the motivating example of robotic tracking of mobile deep ocean animals to present innovations in robotic perception and estimation for hybrid dynamical systems. An approach to estimation for hybrid systems is presented that utilizes uncertain perceptional information about the system's mode to improve tracking of its mode and continuous states. This results in significant improvements in situations where previously reported methods of estimation for hybrid systems perform poorly due to poor distinguishability of the modes. The specific application that motivates this research is an automatic underwater robotic observation system that follows and films individual deep ocean animals. A first version of such a system has been developed jointly by the Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute (MBARI). This robotic observation system is successfully fielded on MBARI's ROVs, but agile specimens often evade the system. When a human ROV pilot performs this task, one advantage that he has over the robotic observation system in these situations is the ability to use visual perceptional information about the target, immediately recognizing any changes in the specimen's behavior mode. With the approach of the human pilot in mind, a new version of the robotic observation system is proposed which is extended to (a) derive perceptional information (visual cues) about the behavior mode of the tracked specimen, and (b) merge this dissimilar, discrete and uncertain information with more traditional continuous noisy sensor data by extending existing algorithms for hybrid estimation. These performance enhancements are enabled by integrating techniques in hybrid estimation, computer vision and machine learning. First, real-time computer vision and classification algorithms extract a visual observation of the target's behavior mode. 
Existing hybrid estimation algorithms are extended to admit this uncertain but discrete observation, complementing the information available from more traditional sensors. State tracking is achieved using a new form of Rao-Blackwellized particle filter called the mode-observed Gaussian Particle Filter. Performance is demonstrated using data from simulation and data collected on actual specimens in the ocean. The framework for estimation using both traditional and perceptional information is easily extensible to other stochastic hybrid systems with mode-related perceptional observations available.
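The core idea of folding an uncertain, discrete visual mode observation into hybrid estimation can be illustrated with a discrete Bayes filter over behavior modes. The transition and classifier-confusion matrices below are illustrative assumptions; the dissertation's mode-observed Gaussian Particle Filter additionally conditions continuous-state particles on the mode estimate:

```python
def update_mode_belief(belief, transition, confusion, observed_mode):
    """One discrete Bayes filter step over behavior modes: predict through
    the mode-transition matrix, then weight each mode by the probability
    that the vision classifier reports `observed_mode` given that mode."""
    n = len(belief)
    # Predict: propagate the prior belief through transition[i][j] = P(j | i).
    predicted = [sum(transition[i][j] * belief[i] for i in range(n))
                 for j in range(n)]
    # Update: confusion[j][k] = P(classifier says k | true mode j).
    unnorm = [confusion[j][observed_mode] * predicted[j] for j in range(n)]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

Even a noisy classifier (e.g., 80% accurate) sharpens an otherwise ambiguous mode belief, which is exactly the regime where modes are poorly distinguishable from continuous sensor data alone.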

  16. [Nursing Experience of Using Mirror Visual Feedback for a Schizophrenia Patient With Visual Hallucinations].

    PubMed

    Lan, Shu-Ling; Chen, Yu-Chi; Chang, Hsiu-Ju

    2018-06-01

The aim of this paper was to describe the nursing application of mirror visual feedback for a patient suffering from long-term visual hallucinations. The intervention period was from May 15th to October 19th, 2015. Using the five facets of psychiatric nursing assessment, several health problems were observed, including disturbed sensory perception (prominent visual hallucinations) and poor self-care (e.g. limited ability to self-bathe and dress). Furthermore, "caregiver role strain" due to the related intense care burden was noted. After building a therapeutic interpersonal relationship, techniques drawing on brain plasticity and mirror visual feedback were applied through multiple nursing care methods to help the patient suppress her visual hallucinations by reinforcing an alternative visual stimulus. We also taught her how to cope with visual hallucinations appropriately. The frequency and content of the visual hallucinations were recorded to evaluate the effects of management. The therapeutic plan was formulated together with the patient in order to boost her self-confidence, and a behavior contract was implemented in order to improve her personal hygiene. In addition, psychoeducation on disease-related topics was provided to the patient's family, and they were encouraged to attend relevant therapeutic activities. As a result, her family became less passive and negative and more engaged in and positive about her future. The crisis of "caregiver role strain" was successfully resolved. We hope this experience can serve as a model for enhancing communication and cooperation between families and staff in similar medical settings.

  17. Motion sickness increases functional connectivity between visual motion and nausea-associated brain regions.

    PubMed

    Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly

    2017-01-01

The brain networks supporting nausea are not yet understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation for anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.
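Heart rate variability analyses of sympathovagal balance, as referenced above, typically use spectral (LF/HF) measures over beat-to-beat (RR) intervals. As a minimal stdlib sketch of HRV extraction, the time-domain RMSSD index (a marker of vagal activity, shown here purely for illustration rather than as the authors' method) is:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between consecutive
    RR intervals (ms): a standard time-domain HRV index of vagal activity."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Lower RMSSD accompanies reduced parasympathetic activity, consistent with the shift toward sympathetic dominance described in the abstract.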

  18. Perception of Stand-on-ability: Do Geographical Slants Feel Steeper Than They Look?

    PubMed

    Hajnal, Alen; Wagman, Jeffrey B; Doyon, Jonathan K; Clark, Joseph D

    2016-07-01

    Past research has shown that haptically perceived surface slant by foot is matched with visually perceived slant by a factor of 0.81. Slopes perceived visually appear shallower than when stood on without looking. We sought to identify the sources of this discrepancy by asking participants to judge whether they would be able to stand on an inclined ramp. In the first experiment, visual perception was compared to pedal perception in which participants took half a step with one foot onto an occluded ramp. Visual perception closely matched the actual maximal slope angle that one could stand on, whereas pedal perception underestimated it. Participants may have been less stable in the pedal condition while taking half a step onto the ramp. We controlled for this by having participants hold onto a sturdy tripod in the pedal condition (Experiment 2). This did not eliminate the difference between visual and haptic perception, but repeating the task while sitting on a chair did (Experiment 3). Beyond balance requirements, pedal perception may also be constrained by the limited range of motion at the ankle and knee joints while standing. Indeed, when we restricted range of motion by wearing an ankle brace pedal perception underestimated the affordance (Experiment 4). Implications for ecological theory were offered by discussing the notion of functional equivalence and the role of exploration in perception. © The Author(s) 2016.

  19. Visual perception and imagery: a new molecular hypothesis.

    PubMed

    Bókkon, I

    2009-05-01

    Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized, cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather constitute a tightly regulated mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized, mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized, cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible through regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.

  20. Object perception is selectively slowed by a visually similar working memory load.

    PubMed

    Robinson, Alan; Manzi, Alberto; Triesch, Jochen

    2008-12-22

    The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.

  1. Optical images of visible and invisible percepts in the primary visual cortex of primates

    PubMed Central

    Macknik, Stephen L.; Haglund, Michael M.

    1999-01-01

    We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363

  2. Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review.

    PubMed

    Spering, Miriam; Montagnini, Anna

    2011-04-22

    Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Radiation-induced optic neuropathy: A magnetic resonance imaging study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, J.; Mancuso, A.; Beck, R.

    1991-03-01

    Optic neuropathy induced by radiation is an infrequent cause of delayed visual loss that may at times be difficult to differentiate from compression of the visual pathways by recurrent neoplasm. The authors describe six patients with this disorder who experienced loss of vision 6 to 36 months after neurological surgery and radiation therapy. Of the six patients in the series, two had a pituitary adenoma and one each had a metastatic melanoma, multiple myeloma, craniopharyngioma, and lymphoepithelioma. Visual acuity in the affected eyes ranged from 20/25 to no light perception. Magnetic resonance (MR) imaging showed sellar and parasellar recurrence of both pituitary adenomas, but the intrinsic lesions of the optic nerves and optic chiasm induced by radiation were enhanced after gadolinium-diethylenetriaminepenta-acetic acid (DTPA) administration and were clearly distinguishable from the suprasellar compression of tumor. Repeated MR imaging showed spontaneous resolution of gadolinium-DTPA enhancement of the optic nerve in a patient who was initially suspected of harboring recurrence of a metastatic malignant melanoma as the cause of visual loss. The authors found that the presumptive diagnosis of radiation-induced optic neuropathy was facilitated by MR imaging with gadolinium-DTPA. This neuro-imaging procedure may help avert exploratory surgery in some patients with recurrent neoplasm in whom the etiology of visual loss is uncertain.

  4. Toward Model Building for Visual Aesthetic Perception

    PubMed Central

    Lughofer, Edwin; Zeng, Xianyi

    2017-01-01

    Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models of human aesthetic appreciation in the visual domain: the neuropsychological, information-processing, mirror, and quartet models, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art, and the validation of this framework via mathematical simulation, is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194

  5. Visual imagery without visual perception: lessons from blind subjects

    NASA Astrophysics Data System (ADS)

    Bértolo, Helder

    2014-08-01

    How visual imagery relates to visual perception remains an open question. Many studies have tried to determine whether the two processes share the same mechanisms or are independent, relying on different neural substrates. Most research has been directed at whether activation of primary visual areas is necessary during imagery. Here we review some of the work providing evidence for both claims. Studying visual imagery in blind subjects offers a way of answering some of these questions, namely whether visual imagery is possible without visual perception. We present results from our group's work on visual activation in dreams and its relation to the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.

  6. An Arts-Based Supplemental Resource's Effect on Teachers' Perceptions of Curriculum Integration, Instructional Materials Development, Learning Activities Selections, and Critical Thinking Improvement

    ERIC Educational Resources Information Center

    Eutsler, Mark L.

    2013-01-01

    Indiana's declining SAT scores prompted the publisher of a statewide magazine covering the literary, performing, and visual arts to take action and create a program to use the magazine as a supplemental resource for students. It was believed that such a supplemental resource could enhance critical thinking and writing skills and help raise SAT…

  7. Enhancing Sensitivity to Visual Motion.

    DTIC Science & Technology

    1980-05-01

    for certain amblyopes, repeated testing enhanced sensitivity several fold. Amblyopia refers to any of a class of diseases in which there is a loss in...See SEKULER, 1980 for a full treatment of these models. The predictions for the Simultaneous and Random conditions from the different models are...Psychologia, 18, 35-50. COHEN, L.B. & SALAPATEK, P. Infant perception. From sensation to cognition. New York, Academic Press. CYNADER, M., BERMAN, N

  8. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations, and their subsequent influences on visual motion perception, depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level, pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing also plays a role. PMID:25873869

  9. The Effects of Visual Beats on Prosodic Prominence: Acoustic Analyses, Auditory Perception and Visual Perception

    ERIC Educational Resources Information Center

    Krahmer, Emiel; Swerts, Marc

    2007-01-01

    Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…

  10. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  11. Working memory and decision processes in visual area v4.

    PubMed

    Hayden, Benjamin Y; Gallant, Jack L

    2013-01-01

    Recognizing and responding to a remembered stimulus requires the coordination of perception, working memory, and decision-making. To investigate the role of visual cortex in these processes, we recorded responses of single V4 neurons during performance of a delayed match-to-sample task that incorporates rapid serial visual presentation of natural images. We found that neuronal activity during the delay period after the cue but before the images depends on the identity of the remembered image and that this change persists while distractors appear. This persistent response modulation has been identified as a diagnostic criterion for putative working memory signals; our data thus suggest that working memory may involve reactivation of sensory neurons. When the remembered image reappears in the neuron's receptive field, visually evoked responses are enhanced; this match enhancement is a diagnostic criterion for decision. One model that predicts these data is the matched filter hypothesis, which holds that during search V4 neurons change their tuning so as to match the remembered cue, and thus become detectors for that image. More generally, these results suggest that V4 neurons participate in the perceptual, working memory, and decision processes that are needed to perform memory-guided decision-making.

  12. Sounds can boost the awareness of visual events through attention without cross-modal integration.

    PubMed

    Pápai, Márta Szabina; Soto-Faraco, Salvador

    2017-01-31

    Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which, sometimes, co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the prediction of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the cues salient for bottom-up attention. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which again was no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account of cross-modal enhancement of visual events below the level of awareness.
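    The probability-summation benchmark used in this record is the standard independence prediction: a bimodal trial succeeds if either unimodal signal alone would have triggered a response. A minimal sketch, with illustrative detection rates rather than the study's data:

```python
def probability_summation(p_a, p_v):
    """Predicted response rate if auditory and visual signals are
    processed independently (no cross-modal integration): a response
    occurs when either channel alone would have produced one."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_v)

# If a flash alone triggers a switch on 30% of trials and a sound alone
# on 40%, independence predicts 1 - 0.7 * 0.6 = 0.58. Only observed
# audiovisual rates exceeding this value would indicate genuine
# integration rather than two independent transients.
print(probability_summation(0.3, 0.4))  # ~0.58
```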

  13. Shared Neural Substrates of Emotionally Enhanced Perceptual and Mnemonic Vividness

    PubMed Central

    Todd, Rebecca M.; Schmitz, Taylor W.; Susskind, Josh; Anderson, Adam K.

    2013-01-01

    It is well-known that emotionally salient events are remembered more vividly than mundane ones. Our recent research has demonstrated that such memory vividness (Mviv) is due in part to the subjective experience of emotional events as more perceptually vivid, an effect we call emotionally enhanced vividness (EEV). The present study built on previously reported research in which fMRI data were collected while participants rated relative levels of visual noise overlaid on emotionally salient and neutral images. Ratings of greater EEV were associated with greater activation in the amygdala and visual cortex. In the present study, we measured BOLD activation that predicted recognition Mviv for these same images 1 week later. Results showed that, after controlling for differences between scenes in low-level objective features, hippocampus activation uniquely predicted subsequent Mviv. In contrast, amygdala and visual cortex regions that were sensitive to EEV were also modulated by subsequent ratings of Mviv. These findings suggest shared neural substrates for the influence of emotional salience on perceptual and mnemonic vividness, with amygdala and visual cortex activation at encoding contributing to the experience of both perception and subsequent memory. PMID:23653601

  14. Self-estimation of physical ability in stepping over an obstacle is not mediated by visual height perception: a comparison between young and older adults.

    PubMed

    Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Yasunaga, Masashi; Ogawa, Susumu; Suzuki, Hiroyuki; Imanaka, Kuniyasu

    2017-07-01

    Older adults tend to overestimate their step-over ability. However, it is unclear whether this is caused by inaccurate self-estimation of physical ability or inaccurate perception of height. We, therefore, measured both visual height perception and self-estimation of step-over ability among young and older adults. Forty-seven older and 16 young adults performed a height perception test (HPT) and a step-over test (SOT). Participants visually judged the height of vertical bars from distances of 7 and 1 m away in the HPT, then self-estimated and, subsequently, actually performed a step-over action in the SOT. The results showed no significant difference between young and older adults in visual height perception. In the SOT, young adults tended to underestimate their step-over ability, whereas older adults either overestimated their abilities or underestimated them to a lesser extent than did the young adults. Moreover, visual height perception was not correlated with the self-estimation of step-over ability in either young or older adults. These results suggest that the self-overestimation of step-over ability seen in some healthy older adults may not be caused by the nature of visual height perception, but by other factors, such as an age-related change in the self-estimation of physical ability itself.

  15. Effect of word familiarity on visually evoked magnetic fields.

    PubMed

    Harada, N; Iwaki, S; Nakagawa, S; Yamaguchi, M; Tonoike, M

    2004-11-30

    This study investigated the effect of the word familiarity of visual stimuli on the word-recognizing function of the human brain. Word familiarity is an index of the relative ease of word perception and is reflected in faster and more accurate word recognition. We studied the effect of word familiarity, using "Hiragana" (phonetic characters in Japanese orthography) as visual stimuli, on visually evoked magnetic fields elicited during a word-naming task. The words were selected from a database of lexical properties of Japanese. The four-character "Hiragana" words were grouped and presented in four classes of familiarity. Three components were observed in the averaged root mean square (RMS) waveforms, at latencies of about 100 ms, 150 ms, and 220 ms. The RMS value of the 220 ms component showed a significant positive correlation with familiarity (F(3,36) = 5.501, p = 0.035). ECDs of the 220 ms component were localized to the intraparietal sulcus (IPS). Increments in the RMS value of the 220 ms component, which may reflect ideographic word recognition (retrieving the word "as a whole"), grew with increasing familiarity. The interaction among characters, which increased with familiarity, may let the word function "as a large symbol", enhancing a "pop-out" effect and the segmentation of the word (as a figure) from the ground.
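    The component amplitudes in this record are quantified as RMS values, which collapse a multichannel MEG recording into one unsigned waveform whose peaks mark evoked components. A minimal sketch, assuming data arranged as channels × samples (not the authors' actual pipeline):

```python
import numpy as np

def rms_waveform(data):
    """Root-mean-square field strength across sensors at each time
    point, collapsing a (channels x samples) recording into a single
    waveform whose peaks mark evoked components."""
    data = np.asarray(data, dtype=float)
    return np.sqrt((data ** 2).mean(axis=0))

# Two hypothetical sensors with opposite-polarity deflections: the RMS
# waveform preserves the deflection that signed averaging across
# sensors would cancel out.
sensors = np.array([[0.0, 3.0, 0.0],
                    [0.0, -4.0, 0.0]])
print(rms_waveform(sensors))  # middle sample = sqrt((9 + 16) / 2) ~ 3.54
```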

  16. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
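    The pixelation step described above amounts to downsampling the camera frame to the electrode-array resolution. A minimal sketch using block averaging, with a hypothetical 6 × 10 grid; actual implants and the AVS(2) processing modules differ:

```python
import numpy as np

def pixelate(frame, rows, cols):
    """Block-average a grayscale camera frame down to the electrode-array
    resolution: each output 'electrode' carries the mean intensity of
    its image block (any ragged border pixels are cropped)."""
    frame = np.asarray(frame, dtype=float)
    h, w = frame.shape
    return frame[: h - h % rows, : w - w % cols] \
        .reshape(rows, h // rows, cols, w // cols) \
        .mean(axis=(1, 3))

# A 120x160 frame with a left-to-right intensity gradient, reduced to a
# hypothetical 6x10 electrode grid; the gradient survives coarsely.
frame = np.tile(np.linspace(0, 255, 160), (120, 1))
grid = pixelate(frame, 6, 10)
print(grid.shape)  # (6, 10)
```

    A real pipeline would typically run contrast or edge enhancement before this step, since block averaging alone blurs exactly the transitions the abstract says must be preserved.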

  17. A tone mapping operator based on neural and psychophysical models of visual perception

    NASA Astrophysics Data System (ADS)

    Cyriac, Praveen; Bertalmio, Marcelo; Kane, David; Vazquez-Corral, Javier

    2015-03-01

    High dynamic range imaging techniques involve capturing and storing real-world radiance values that span many orders of magnitude. However, common display devices can usually reproduce intensity ranges only up to two to three orders of magnitude. Therefore, in order to display a high dynamic range image on a low dynamic range screen, the dynamic range of the image needs to be compressed without losing details or introducing artefacts, a process called tone mapping. A good tone mapping operator must produce a low dynamic range image that matches the perception of the real-world scene as closely as possible. We propose a two-stage tone mapping approach, in which the first stage is a global method for range compression based on the gamma curve that best equalizes the lightness histogram, and the second stage performs local contrast enhancement and color induction using neural activity models for the visual cortex.
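    The first, global stage can be sketched as a search for the gamma whose output lightness histogram is closest to uniform, i.e. whose mapped, sorted values best match a linear ramp. This is an illustrative reconstruction under that reading, not the authors' exact operator:

```python
import numpy as np

def best_gamma(luminance, candidates=np.linspace(0.1, 1.0, 91)):
    """Pick the gamma whose curve best equalizes the lightness histogram:
    after mapping, the sorted values should approximate a uniform ramp
    (the CDF of a perfectly equalized image)."""
    lum = np.sort(np.asarray(luminance, dtype=float).ravel())
    lum = lum / lum.max()                       # normalize to [0, 1]
    ramp = np.linspace(0.0, 1.0, len(lum))      # ideal equalized CDF
    errors = [np.abs(lum ** g - ramp).mean() for g in candidates]
    return candidates[int(np.argmin(errors))]

# Skewed, HDR-like luminances (many dark pixels, a few very bright ones)
# call for a strongly compressive curve, i.e. a gamma well below 1.
rng = np.random.default_rng(0)
hdr = rng.pareto(2.0, size=10000) + 1e-3
gamma = best_gamma(hdr)
mapped = (hdr / hdr.max()) ** gamma
print(gamma < 1.0)  # True for this heavy-tailed input
```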

  18. Enhancing Visual Perception and Motor Accuracy among School Children through a Mindfulness and Compassion Program

    PubMed Central

    Tarrasch, Ricardo; Margalit-Shalom, Lilach; Berger, Rony

    2017-01-01

    The present study assessed the effects of the mindfulness/compassion cultivating program "Call to Care-Israel" on performance in visual perception (VP) and motor accuracy, as well as on anxiety levels and self-reported mindfulness, among 4th and 5th grade students. One hundred and thirty-eight children participated in the program for 24 weekly sessions, while 78 children served as controls. Repeated-measures ANOVAs yielded significant interactions between time of measurement and group for VP, motor accuracy, reported mindfulness, and anxiety. Post hoc tests revealed significant improvements in the four aforementioned measures in the experimental group only. In addition, significant correlations were obtained between the improvement in motor accuracy and both the reduction in anxiety and the increase in mindfulness. Since VP and motor accuracy are basic skills associated with quantifiable academic characteristics, such as reading and mathematical abilities, the results may suggest that mindfulness practice has the ability to improve academic achievement. PMID:28286492

  19. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.

  20. Tagging cortical networks in emotion: a topographical analysis

    PubMed Central

    Keil, Andreas; Costa, Vincent; Smith, J. Carson; Sabatinelli, Dean; McGinnis, E. Menton; Bradley, Margaret M.; Lang, Peter J.

    2013-01-01

    Viewing emotional pictures is associated with heightened perception and attention, indexed by a relative increase in visual cortical activity. Visual cortical modulation by emotion is hypothesized to reflect re-entrant connectivity originating in higher-order cortical and/or limbic structures. The present study used dense-array electroencephalography and individual brain anatomy to investigate functional coupling between the visual cortex and other cortical areas during affective picture viewing. Participants viewed pleasant, neutral, and unpleasant pictures that flickered at a rate of 10 Hz to evoke steady-state visual evoked potentials (ssVEPs) in the EEG. The spectral power of ssVEPs was quantified using Fourier transform, and cortical sources were estimated using beamformer spatial filters based on individual structural magnetic resonance images. In addition to lower-tier visual cortex, a network of occipito-temporal and parietal (bilateral precuneus, inferior parietal lobules) structures showed enhanced ssVEP power when participants viewed emotional (either pleasant or unpleasant), compared to neutral pictures. Functional coupling during emotional processing was enhanced between the bilateral occipital poles and a network of temporal (left middle/inferior temporal gyrus), parietal (bilateral parietal lobules), and frontal (left middle/inferior frontal gyrus) structures. These results converge with findings from hemodynamic analyses of emotional picture viewing and suggest that viewing emotionally engaging stimuli is associated with the formation of functional links between visual cortex and the cortical regions underlying attention modulation and preparation for action. PMID:21954087
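    The ssVEP "frequency tagging" analysis in this record quantifies spectral power at the known 10 Hz driving frequency of the flickering pictures. A minimal single-channel sketch on a synthetic epoch (all parameters hypothetical), rather than the study's beamformer pipeline:

```python
import numpy as np

def power_at(signal, fs, freq):
    """Spectral power at the FFT bin nearest the tagging frequency."""
    spectrum = np.abs(np.fft.rfft(np.asarray(signal, dtype=float))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[int(np.argmin(np.abs(freqs - freq)))]

# A hypothetical 2 s EEG epoch sampled at 250 Hz: a 10 Hz ssVEP riding
# on broadband noise. Power at the tagged driving frequency should
# stand out against a neighboring, untagged frequency.
fs, tag = 250.0, 10.0
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(1)
epoch = np.sin(2 * np.pi * tag * t) + 0.2 * rng.standard_normal(len(t))
print(power_at(epoch, fs, tag) > power_at(epoch, fs, 13.0))  # True
```

    Choosing an epoch length that is an integer number of stimulation cycles, as here (2 s at 10 Hz), puts the tag exactly on an FFT bin and avoids spectral leakage.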

  1. Human gamma band activity and perception of a gestalt.

    PubMed

    Keil, A; Müller, M M; Ray, W J; Gruber, T; Elbert, T

    1999-08-15

    Neuronal oscillations in the gamma band (above 30 Hz) have been proposed to be a possible mechanism for the visual representation of objects. The present study examined the topography of gamma band spectral power and event-related potentials in human EEG associated with perceptual switching effected by rotating ambiguous (bistable) figures. Eleven healthy human subjects were presented two rotating bistable figures: first, a face figure that allowed perception of a sad or happy face depending on orientation and therefore caused a perceptual switch at defined points in time when rotated, and, second, a modified version of the Rubin vase, allowing perception as a vase or two faces whereby the switch was orientation-independent. Nonrotating figures served as further control stimuli. EEG was recorded using a high-density array with 128 electrodes. We found a negative event-related potential associated with the switching of the sad-happy figure, which was most pronounced at central prefrontal sites. Gamma band activity (GBA) was enhanced at occipital electrode sites in the rotating bistable figures compared with the standing stimuli, being maximal at vertical stimulus orientations that allowed an easy recognition of the sad and happy face or the vase-faces, respectively. At anterior electrodes, GBA showed a complementary pattern, being maximal when stimuli were oriented horizontally. The findings support the notion that formation of a visual percept may involve oscillations in a distributed neuronal assembly.

  2. Visual analytics in medical education: impacting analytical reasoning and decision making for quality improvement.

    PubMed

    Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil

    2015-01-01

The medical curriculum is the main tool representing the entire undergraduate medical education. Due to its complexity and multilayered structure, it is of limited use to teachers in medical education for quality improvement purposes. In this study we evaluated three visualizations of curriculum data from a pilot course, using teachers from an undergraduate medical program and applying visual analytics methods. We found that visual analytics can positively impact analytical reasoning and decision making in medical education by realizing variables capable of enhancing human perception and cognition of complex curriculum data. The positive results from our evaluation of a medical curriculum, albeit at a small scale, signal the need to extend this method to an entire medical curriculum. As our approach sustains low levels of complexity, it opens a promising new direction in medical education informatics research.

  3. Effects of complete monocular deprivation in visuo-spatial memory.

    PubMed

    Cattaneo, Zaira; Merabet, Lotfi B; Bhatt, Ela; Vecchi, Tomaso

    2008-09-30

    Monocular deprivation has been associated with both specific deficits and enhancements in visual perception and processing. In this study, performance on a visuo-spatial memory task was compared in congenitally monocular individuals and sighted control individuals viewing monocularly (i.e., patched) and binocularly. The task required the individuals to view and memorize a series of target locations on two-dimensional matrices. Overall, congenitally monocular individuals performed worse than sighted individuals (with a specific deficit in simultaneously maintaining distinct spatial representations in memory), indicating that the lack of binocular visual experience affects the way visual information is represented in visuo-spatial memory. No difference was observed between the monocular and binocular viewing control groups, suggesting that early monocular deprivation affects the development of cortical mechanisms mediating visuo-spatial cognition.

  4. Perceptual upright: the relative effectiveness of dynamic and static images under different gravity States.

    PubMed

    Jenkin, Michael R; Dyde, Richard T; Jenkin, Heather L; Zacher, James E; Harris, Laurence R

    2011-01-01

    The perceived direction of up depends on both gravity and visual cues to orientation. Static visual cues to orientation have been shown to be less effective in influencing the perception of upright (PU) under microgravity conditions than they are on earth (Dyde et al., 2009). Here we introduce dynamic orientation cues into the visual background to ascertain whether they might increase the effectiveness of visual cues in defining the PU under different gravity conditions. Brief periods of microgravity and hypergravity were created using parabolic flight. Observers viewed a polarized, natural scene presented at various orientations on a laptop viewed through a hood which occluded all other visual cues. The visual background was either an animated video clip in which actors moved along the visual ground plane or an individual static frame taken from the same clip. We measured the perceptual upright using the oriented character recognition test (OCHART). Dynamic visual cues significantly enhance the effectiveness of vision in determining the perceptual upright under normal gravity conditions. Strong trends were found for dynamic visual cues to produce an increase in the visual effect under both microgravity and hypergravity conditions.

  5. Perceptions of the Visually Impaired toward Pursuing Geography Courses and Majors in Higher Education

    ERIC Educational Resources Information Center

    Murr, Christopher D.; Blanchard, R. Denise

    2011-01-01

    Advances in classroom technology have lowered barriers for the visually impaired to study geography, yet few participate. Employing stereotype threat theory, we examined whether beliefs held by the visually impaired affect perceptions toward completing courses and majors in visually oriented disciplines. A test group received a low-level threat…

  6. Assistive Technology Competencies of Teachers of Students with Visual Impairments: A Comparison of Perceptions

    ERIC Educational Resources Information Center

    Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora

    2011-01-01

    This study surveyed teachers of students with visual impairments in Texas on their perceptions of a set of assistive technology competencies developed for teachers of students with visual impairments by Smith and colleagues (2009). Differences in opinion between practicing teachers of students with visual impairments and Smith's group of…

  7. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  8. The role of vision in auditory distance perception.

    PubMed

    Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro

    2012-01-01

In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing response variability. The presence of visual information is known to affect auditory perception in the horizontal plane (azimuth), but little research has examined the influence of vision on auditory distance perception. In general, the data from these studies are contradictory and do not completely define how visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans were performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in memory a representation of the environment that later improves the perception of distance.

  9. 3D Visualizations of Abstract DataSets

    DTIC Science & Technology

    2010-08-01

Subject terms: 3D displays, 2.5D displays, abstract network visualizations, depth perception, human… Abstract fragments: contrasts no shadows, drop shadows, and drop lines; …altitude perception in airspace management and airspace route planning—simulated reality visualizations that employ altitude and heading as well as…; …cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract…

  10. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. The aim was to test the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. Compared with developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic functions of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, alongside normal processing of simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.

  11. Perceptions of Visual Literacy. Selected Readings from the Annual Conference of the International Visual Literacy Association (21st, Scottsdale, Arizona, October 1989).

    ERIC Educational Resources Information Center

    Braden, Roberts A., Ed.; And Others

    These proceedings contain 37 papers from 51 authors noted for their expertise in the field of visual literacy. The collection is divided into three sections: (1) "Examining Visual Literacy" (including, in addition to a 7-year International Visual Literacy Association bibliography covering the period from 1983-1989, papers on the perception of…

  12. Imagined Actions Aren't Just Weak Actions: Task Variability Promotes Skill Learning in Physical Practice but Not in Mental Practice

    ERIC Educational Resources Information Center

    Coelho, Chase J.; Nusbaum, Howard C.; Rosenbaum, David A.; Fenn, Kimberly M.

    2012-01-01

    Early research on visual imagery led investigators to suggest that mental visual images are just weak versions of visual percepts. Later research helped investigators understand that mental visual images differ in deeper and more subtle ways from visual percepts. Research on motor imagery has yet to reach this mature state, however. Many authors…

  13. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most research on the visual system has been focused in two almost non-overlapping directions. One focus has been low-level perception as studied by psychophysics; the other has been high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared with physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, work has focused on the representation of objects and on the connections between various physical effects and object perception: the perception of 3D from a variety of physical measurements, including motion, shading, and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. The processes underlying the integration of information over space therefore represent a critical aspect of the visual system, and understanding them has implications for the expected physiological mechanisms as well as for models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: (1) modeling visual search for the detection of speed deviation; (2) perception of moving objects; and (3) exploring the role of eye movements in various visual tasks.

  14. Biocybernetic factors in human perception and memory

    NASA Technical Reports Server (NTRS)

    Lai, D. C.

    1975-01-01

    The objective of this research is to develop biocybernetic techniques for use in the analysis and development of skills required for the enhancement of concrete images of the 'eidetic' type. The scan patterns of the eye during inspection of scenes are treated as indicators of the brain's strategy for the intake of visual information. The authors determine the features that differentiate visual scan patterns associated with superior imagery from scan patterns associated with inferior imagery, and simultaneously differentiate the EEG features correlated with superior imagery from those correlated with inferior imagery. A closely-coupled man-machine system has been designed to generate image enhancement and to train the individual to exert greater voluntary control over his own imagery. The models for EEG signals and saccadic eye movement in the man-machine system have been completed. The report describes the details of these models and discusses their usefulness.

  15. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    PubMed

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although a brain network is presumably engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual speech perception or unimodal speech perception with irrelevant noise in the counterpart modality. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single linkage distance, to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual speech cue) or unimodal speech cues with irrelevant noise in the counterpart modality (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception of congruent audiovisual stimuli, tighter couplings of the left anterior temporal gyrus-anterior insula component and of right premotor-visual components were observed than in the auditory or visual speech cue conditions, respectively. Interestingly, visual speech perceived under white noise was marked by tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
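The network filtration described here reduces to tracking connected components of the functional network as an edge threshold sweeps upward (which is exactly what single-linkage clustering encodes). A minimal sketch with a toy dissimilarity matrix; all values are invented for illustration, not the study's data:

```python
def n_components(dist, threshold):
    """Connected components of the graph whose edges are region pairs with
    dissimilarity <= threshold (one step of the filtration), via union-find."""
    n = len(dist)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] <= threshold:
                parent[find(i)] = find(j)  # merge the two components
    return len({find(i) for i in range(n)})

# Toy dissimilarity (e.g., 1 - correlation) between 5 brain regions.
dist = [
    [0.0, 0.2, 0.8, 0.9, 0.9],
    [0.2, 0.0, 0.7, 0.9, 0.9],
    [0.8, 0.7, 0.0, 0.3, 0.9],
    [0.9, 0.9, 0.3, 0.0, 0.9],
    [0.9, 0.9, 0.9, 0.9, 0.0],
]

# Sweeping the threshold reproduces the filtration: components merge
# monotonically as the threshold grows, until the network is fully connected.
assert [n_components(dist, t) for t in (0.1, 0.25, 0.35, 0.75, 0.95)] == [5, 4, 3, 2, 1]
```

The thresholds at which components merge (here 0.2, 0.3, 0.7, 0.9) are the single-linkage dendrogram heights, which is why the authors can analyze "all possible thresholds" at once.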

  16. Disentangling visual imagery and perception of real-world objects

    PubMed Central

    Lee, Sue-Hyun; Kravitz, Dwight J.; Baker, Chris I.

    2011-01-01

    During mental imagery, visual representations can be evoked in the absence of “bottom-up” sensory input. Prior studies have reported similar neural substrates for imagery and perception, but studies of brain-damaged patients have revealed a double dissociation with some patients showing preserved imagery in spite of impaired perception and others vice versa. Here, we used fMRI and multi-voxel pattern analysis to investigate the specificity, distribution, and similarity of information for individual seen and imagined objects to try and resolve this apparent contradiction. In an event-related design, participants either viewed or imagined individual named object images on which they had been trained prior to the scan. We found that the identity of both seen and imagined objects could be decoded from the pattern of activity throughout the ventral visual processing stream. Further, there was enough correspondence between imagery and perception to allow discrimination of individual imagined objects based on the response during perception. However, the distribution of object information across visual areas was strikingly different during imagery and perception. While there was an obvious posterior-anterior gradient along the ventral visual stream for seen objects, there was an opposite gradient for imagined objects. Moreover, the structure of representations (i.e. the pattern of similarity between responses to all objects) was more similar during imagery than perception in all regions along the visual stream. These results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies. PMID:22040738

  17. How does parents' visual perception of their child's weight status affect their feeding style?

    PubMed

    Yilmaz, Resul; Erkorkmaz, Ünal; Ozcetin, Mustafa; Karaaslan, Erhan

    2013-01-01

Eating style is one of the prominent factors that determine energy intake. One of the factors that shape parental feeding style is the parents' perception of the weight status of the child. The aim of this study was to evaluate the relationship between mothers' visual perception of their children's weight status and their feeding style. A cross-sectional survey was completed by the mothers of 380 preschool children aged 5 to 7 years (mean 6.14 years). Visual perception scores were measured with a sketch, and maternal feeding style was measured with the validated "Parental Feeding Style Questionnaire". The parental feeding dimensions "emotional feeding" and "encouragement to eat" subscale scores were low in children classified as overweight by visual perception. "Emotional feeding" and "permissive control" subscale scores differed significantly between children whose weight was perceived correctly and those whose weight was incorrectly perceived as low due to maternal misperception. Various feeding styles were related to maternal visual perception. The best approach to preventing obesity and underweight may be to focus on achieving correct parental perception of the weight status of the child, thus improving parental skills and leading parents to implement proper feeding styles. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.

  18. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    ERIC Educational Resources Information Center

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…

  19. Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.

    PubMed

    Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A

    2007-06-01

This study evaluated the influence of visual-spatial perception on the laparoscopic performance of novices using a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and laparoscopic skills were assessed objectively during 1-h practice sessions on the LapSim(R) comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with performance scores on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those with a low degree of spatial perception (Group B) (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may help educators develop adequate training programs that can be individually adapted.

  20. Distortions of Subjective Time Perception Within and Across Senses

    PubMed Central

    van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan

    2008-01-01

    Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248

  1. Parameters of semantic multisensory integration depend on timing and modality order among people on the autism spectrum: evidence from event-related potentials.

    PubMed

    Russo, N; Mottron, L; Burack, J A; Jemel, B

    2012-07-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart the dichotomy of both extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. 13 TD and 14 autistics matched on IQ completed a forced choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory first and visual first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    NASA Astrophysics Data System (ADS)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  3. Factors contributing to speech perception scores in long-term pediatric cochlear implant users.

    PubMed

    Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A

    2011-02-01

    The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford Kowal Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. 
Calculated curves for percent correct speech perception scores (LNT and Bamford Kowal Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody Picture Vocabulary Test achieved asymptote at similar ages, around 10 to 11 yrs. On average, children receiving CIs between 2 and 5 yrs of age exhibited significant improvement on tests of speech perception, lipreading, speech production, and language skills measured between primary grades and adolescence. Evidence suggests that improvement in speech perception scores with age reflects increased spoken language level up to a language age of about 10 yrs. Speech perception performance significantly decreased with softer stimulus intensity level and with introduction of background noise. Upgrades to newer speech processing strategies and greater use of frequency-modulated systems may be beneficial for ameliorating performance under these demanding listening conditions.

  4. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    PubMed Central

    Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment. Results In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379
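The spatial-frequency filtering of the visual stimuli described here can be illustrated with a minimal FFT-based low-pass filter. This is a generic sketch, not the study's actual band-limiting of the video; the cutoff values and the test gratings are assumptions:

```python
import numpy as np

def low_pass(image, cutoff):
    """Zero out spatial frequencies above `cutoff` cycles per image."""
    F = np.fft.fftshift(np.fft.fft2(image))          # center the DC component
    h, w = image.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    keep = np.sqrt(xx ** 2 + yy ** 2) <= cutoff      # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * keep)))

# A fine grating (20 cycles/image) is almost entirely removed by a
# 5 cycles/image cutoff, while a coarse grating (2 cycles/image) survives.
x = np.arange(64)
fine = np.tile(np.sin(2 * np.pi * 20 * x / 64), (64, 1))
coarse = np.tile(np.sin(2 * np.pi * 2 * x / 64), (64, 1))
assert low_pass(fine, 5).std() < 0.05 * fine.std()
assert low_pass(coarse, 5).std() > 0.9 * coarse.std()
```

Sweeping the cutoff over a range of values would produce a stimulus continuum analogous to the degraded-to-full-resolution conditions used in the experiments.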

  5. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development

    PubMed Central

    Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin

    2017-01-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122
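Motion coherence thresholds such as the MCT above are typically measured with random-dot kinematograms, in which only a fraction of dots (the coherence level) moves in a common "signal" direction while the rest move randomly; the threshold is the smallest coherence supporting reliable direction judgments. A minimal sketch, under our own naming and parameters (illustrative only, not the study's stimulus code), of how a coherence level determines per-dot motion directions:

```python
import numpy as np

def rdk_frame(n_dots=100, coherence=0.3, seed=None):
    """One frame-to-frame update of a random-dot kinematogram.

    A fraction `coherence` of the dots moves in the signal direction
    (here rightward, angle 0); the remaining dots move in random
    directions. Returns an array of per-dot displacement angles (radians).
    """
    rng = np.random.default_rng(seed)
    n_signal = int(round(coherence * n_dots))
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)
    angles[:n_signal] = 0.0  # signal dots share the common direction
    return angles
```

In a threshold procedure, `coherence` would be varied from trial to trial (e.g. by a staircase) until direction discrimination falls to a criterion level of accuracy.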

  6. Visual motion integration for perception and pursuit

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Beutter, B. R.; Lorenceau, J.

    2000-01-01

    To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

  7. Neuro-ophthalmic manifestations of cerebrovascular accidents.

    PubMed

    Ghannam, Alaa S Bou; Subramanian, Prem S

    2017-11-01

Ocular functions can be affected in almost any type of cerebrovascular accident (CVA), creating a burden on the patient and family and limiting functionality. The present review summarizes the different ocular outcomes after stroke, divided into three categories: vision, ocular motility, and visual perception. We also discuss interventions that have been proposed to help restore vision and perception after CVA. Interventions that might help expand or compensate for visual field loss and visuospatial neglect include explorative saccade training, prisms, visual restoration therapy (VRT), and transcranial direct current stimulation (tDCS). VRT makes use of neuroplasticity, which has shown efficacy in animal models but remains controversial in human studies. CVAs can lead to decreased visual acuity, visual field loss, ocular motility abnormalities, and visuospatial perception deficits. Although ocular motility problems can be corrected with surgery, vision and perception deficits are more difficult to overcome. Interventions to restore or compensate for visual field deficits remain controversial despite theoretical underpinnings, animal model evidence, and case reports of their efficacy.

  8. A comparison of haptic material perception in blind and sighted individuals.

    PubMed

    Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R

    2015-10-01

    We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization. Copyright © 2015 Elsevier Ltd. All rights reserved.
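The comparison described above (principal components analysis followed by a Procrustes superimposition of the two groups' configurations) can be sketched with a plain-NumPy orthogonal Procrustes fit: a disparity near zero means the two representations agree up to translation, scaling, and rotation. This is an illustrative reimplementation with hypothetical data, not the authors' analysis code:

```python
import numpy as np

def procrustes_disparity(X, Y):
    """Residual sum of squares after optimally translating, scaling, and
    rotating configuration Y onto configuration X (orthogonal Procrustes).
    Near-zero disparity means the two configurations share the same shape."""
    X0 = X - X.mean(axis=0)
    Y0 = Y - Y.mean(axis=0)
    X0 = X0 / np.linalg.norm(X0)          # scale both to unit Frobenius norm
    Y0 = Y0 / np.linalg.norm(Y0)
    U, s, Vt = np.linalg.svd(X0.T @ Y0)   # optimal rotation via SVD
    R = (U @ Vt).T
    scale = s.sum()
    return float(np.sum((X0 - scale * Y0 @ R) ** 2))

# Hypothetical PCA scores for 8 "materials" in two groups, where group B's
# configuration is a rotated, scaled, shifted copy of group A's:
rng = np.random.default_rng(1)
group_a = rng.normal(size=(8, 2))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
group_b = 1.5 * group_a @ rot.T + 0.2
disparity = procrustes_disparity(group_a, group_b)  # ~0: same representation
```

`scipy.spatial.procrustes` provides the same standardized superimposition and disparity measure if SciPy is available.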

  9. [Perception of physiological visual illusions by individuals with schizophrenia].

    PubMed

Ciszewski, Sławomir; Wichowicz, Hubert Michał; Żuk, Krzysztof

    2015-01-01

Visual perception by individuals with schizophrenia has not been extensively researched. The focus of this review is the perception of physiological visual illusions by patients with schizophrenia, and the differences in perception reported in a small number of studies. Increased or decreased susceptibility of these patients to various illusions seems to be unconnected to where in the visual apparatus the illusion originates, a pattern that also holds for illusions connected to other modalities. The susceptibility of patients with schizophrenia to haptic illusions has not yet been investigated, although the need for such investigation is clear. The emerging picture is that some individuals with schizophrenia are "resistant" to some of the illusions and are able to assess visual phenomena more "rationally", yet certain illusions (e.g., the Müller-Lyer illusion) are perceived more intensely. Disturbances in the perception of visual illusions have neither been classified as possible diagnostic indicators of a dangerous mental condition, nor included in the endophenotype of schizophrenia. Although the relevant data are sparse, the ability to replicate the results is limited, and the research model lacks a "gold standard", some preliminary conclusions may be drawn. There are indications that disturbances in visual perception are connected to the extent of disorganization, poor initial social functioning, poor prognosis, and the types of schizophrenia described as neurodevelopmental. Patients with schizophrenia usually fail to perceive those illusions that require volitional controlled attention, and show a lack of sensitivity to the contrast between shape and background.

  10. A Study on Quality of Public and Private Funded B.Ed Programme in Northern Region Based on Perception of Teacher Trainees with Regard to Learning Enhancement

    ERIC Educational Resources Information Center

    Barua, Sukti

    2015-01-01

    One of the key areas of a secondary teacher education programme is to train and prepare teacher trainees to function and carry out their responsibilities with commitment and most importantly as professionals. In the light of this, it is crucial for all teacher education institutions to visualize and share a common goal towards teacher preparation.…

  11. Effects of culture on musical pitch perception.

    PubMed

    Wong, Patrick C M; Ciocca, Valter; Chan, Alice H D; Ha, Louisa Y Y; Tan, Li-Hai; Peretz, Isabelle

    2012-01-01

The strong association between music and speech has been supported by recent research focusing on musicians' superior abilities in second language learning and neural encoding of foreign speech sounds. However, evidence for a double association--the influence of linguistic background on music pitch processing and disorders--remains elusive. Because languages differ in their usage of elements (e.g., pitch) that are also essential for music, a unique opportunity for examining such language-to-music associations comes from a cross-cultural (linguistic) comparison of congenital amusia, a neurogenetic disorder affecting the music (pitch and rhythm) processing of about 5% of the Western population. In the present study, two populations (Hong Kong and Canada) were compared. One spoke a tone language in which differences in voice pitch correspond to differences in word meaning (in Hong Kong Cantonese, /si/ means 'teacher' and 'to try' when spoken in a high and mid pitch pattern, respectively). Using the On-line Identification Test of Congenital Amusia, we found that Cantonese speakers as a group tended to show enhanced pitch perception ability compared to speakers of Canadian French and English (non-tone languages). This enhanced ability occurs in the absence of differences in rhythmic perception and persists even after relevant factors such as musical background and age were controlled. Following a common definition of amusia (5% of the population), we found Hong Kong pitch amusics also show enhanced pitch abilities relative to their Canadian counterparts. These findings not only provide critical evidence for a double association of music and speech, but also argue for the reconceptualization of communicative disorders within a cultural framework. Along with recent studies documenting cultural differences in visual perception, our auditory evidence challenges the common assumption of universality of basic mental processes and speaks to the domain generality of culture-to-perception influences. PMID:22509257

  13. Effects of vicarious pain on self-pain perception: investigating the role of awareness

    PubMed Central

    Terrighena, Esslin L; Lu, Ge; Yuen, Wai Ping; Lee, Tatia MC; Keuper, Kati

    2017-01-01

The observation of pain in others may enhance or reduce self-pain, yet the boundary conditions and factors that determine the direction of such effects are poorly understood. The current study set out to show that visual stimulus awareness plays a crucial role in determining whether vicarious pain primarily activates behavioral defense systems that enhance pain sensitivity and stimulate withdrawal, or appetitive systems that attenuate pain sensitivity and stimulate approach. We employed a mixed factorial design with the between-subject factors exposure time (subliminal vs optimal) and vicarious pain (pain vs no-pain images), and the within-subject factor session (baseline vs trial) to investigate how visual awareness of vicarious pain images affects subsequent self-pain in the cold-pressor test. Self-pain tolerance, intensity and unpleasantness were evaluated in a sample of 77 healthy participants. Results revealed significant interactions of exposure time and vicarious pain in all three dependent measures. In the presence of visual awareness (optimal condition), vicarious pain compared to no-pain elicited overall enhanced self-pain sensitivity, indexed by reduced pain tolerance and enhanced ratings of pain intensity and unpleasantness. Conversely, in the absence of visual awareness (subliminal condition), vicarious pain evoked decreased self-pain intensity and unpleasantness while pain tolerance remained unaffected. These findings suggest that the activation of defense mechanisms by vicarious pain depends on relatively elaborate cognitive processes, while, strikingly, the appetitive system is activated in a highly automatic manner independent of stimulus awareness. Such mechanisms may have evolved to facilitate empathic, protective approach responses toward suffering individuals, ensuring survival of the protective social group. PMID:28831270

  14. Evaluation of a visual risk communication tool: effects on knowledge and perception of blood transfusion risk.

    PubMed

    Lee, D H; Mehta, M D

    2003-06-01

Effective risk communication in transfusion medicine is important for health-care consumers, but understanding the numerical magnitude of risks can be difficult. The objective of this study was to determine the effect of a visual risk communication tool on the knowledge and perception of transfusion risk. Laypeople were randomly assigned to receive transfusion risk information with either a written or a visual presentation format for communicating and comparing the probabilities of transfusion risks relative to other hazards. Knowledge of transfusion risk was ascertained with a multiple-choice quiz and risk perception was ascertained by psychometric scaling and principal components analysis. Two hundred subjects were recruited and randomly assigned. Risk communication with both written and visual presentation formats increased knowledge of transfusion risk and decreased the perceived dread and severity of transfusion risk. Neither format changed the perceived knowledge and control of transfusion risk, nor the perceived benefit of transfusion. No differences in knowledge or risk perception outcomes were detected between the groups randomly assigned to written or visual presentation formats. Risk communication that incorporates risk comparisons in either written or visual presentation formats can improve knowledge and reduce the perception of transfusion risk in laypeople.

  15. Neuro-cognitive mechanisms of conscious and unconscious visual perception: From a plethora of phenomena to general principles

    PubMed Central

    Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael

    2011-01-01

    Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669

  16. How do visual and postural cues combine for self-tilt perception during slow pitch rotations?

    PubMed

    Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L

    2014-11-01

Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. The development and discussion of computerized visual perception assessment tool for Chinese characters structures - Concurrent estimation of the overall ability and the domain ability in item response theory approach.

    PubMed

    Wu, Huey-Min; Lin, Chin-Kai; Yang, Yu-Mao; Kuo, Bor-Chen

    2014-11-12

Visual perception is the fundamental skill required for a child to recognize words, and to read and write. No visual perception assessment tool based on Chinese characters had been developed for preschool children in Taiwan. The purposes of this study were to develop a computerized visual perception assessment tool for Chinese character structures and to explore the psychometric characteristics of the assessment tool. This study adopted purposive sampling and evaluated 551 kindergarten-age children (293 boys, 258 girls) ranging from 46 to 81 months of age. The test instrument used in this study consisted of three subtests and 58 items, including tests of basic strokes, single-component characters, and compound characters. Based on the results of model fit analysis, higher-order item response theory was used to estimate performance in visual perception, basic strokes, single-component characters, and compound characters simultaneously. Analyses of variance were used to detect significant differences between age groups and gender groups. The difficulty of identifying items in the visual perception test ranged from -2 to 1. The visual perception ability of 4- to 6-year-old children ranged from -1.66 to 2.19. Gender did not have significant effects on overall performance. However, there were significant differences among the age groups: the performance of 6-year-olds was better than that of 5-year-olds, which was better than that of 4-year-olds. This study obtained detailed diagnostic scores by using a higher-order item response theory model to understand the visual perception of basic strokes, single-component characters, and compound characters. Further statistical analysis showed that, for basic strokes and compound characters, girls performed better than boys; there were also differences within each age group. For single-component characters, there was no difference in performance between boys and girls. However, again the performance of 6-year-olds was better than that of 4-year-olds, although there were no statistical differences between the performance of 5-year-olds and 6-year-olds. The tests of basic strokes, single-component characters, and compound characters had good reliability and validity; the tool can therefore be applied to diagnose visual perception problems at the preschool level. Copyright © 2014 Elsevier Ltd. All rights reserved.
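In item response theory, the item difficulties (-2 to 1) and abilities (-1.66 to 2.19) reported above lie on a shared logit scale, so they can be compared directly. A minimal unidimensional Rasch (one-parameter IRT) sketch shows how such numbers translate into response probabilities; note the study fit a more elaborate higher-order model, so this is only an illustration:

```python
import math

def rasch_p(theta, b):
    """Rasch (one-parameter IRT) model: probability that a child with
    ability `theta` answers an item of difficulty `b` correctly.
    Ability and difficulty are on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Using the ranges reported above (difficulties -2..1, abilities -1.66..2.19):
p_hard = rasch_p(2.19, 1.0)    # strongest ability vs hardest item, ~0.77
p_easy = rasch_p(-1.66, -2.0)  # weakest ability vs easiest item, ~0.58
```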

  18. The Effects of Compensatory Auditory Stimulation and High-Definition Transcranial Direct Current Stimulation (HD-tDCS) on Tinnitus Perception - A Randomized Pilot Study.

    PubMed

    Henin, Simon; Fein, Dovid; Smouha, Eric; Parra, Lucas C

    2016-01-01

Tinnitus correlates with elevated hearing thresholds and reduced cochlear compression. We hypothesized that reduced peripheral input leads to elevated neuronal gain resulting in the perception of a phantom sound. The purpose of this pilot study was to test whether compensating for this peripheral deficit could reduce the tinnitus percept acutely using customized auditory stimulation. To further enhance the effects of auditory stimulation, this intervention was paired with high-definition transcranial direct current stimulation (HD-tDCS). A randomized sham-controlled, single-blind study was conducted in a clinical setting on adult participants with chronic tinnitus (n = 14). Compensatory auditory stimulation (CAS) and HD-tDCS were administered either individually or in combination in order to assess the effects of both interventions on tinnitus perception. CAS consisted of sound exposure typical of daily living (a 20-minute soundtrack of a TV show), which was adapted with compressive gain to compensate for deficits in each subject's individual audiogram. Minimum masking levels and the visual analog scale were used to assess the strength of the tinnitus percept immediately before and after the treatment intervention. CAS reduced minimum masking levels, and visual analog scale ratings trended towards improvement. Effects of HD-tDCS could not be resolved with the current sample size. The results of this pilot study suggest that providing tailored auditory stimulation with frequency-specific gain and compression may alleviate tinnitus in a clinical population. Further experimentation with longer interventions is warranted in order to optimize effect sizes.

  19. Visual Perception and Visual-Motor Integration in Very Preterm and/or Very Low Birth Weight Children: A Meta-Analysis

    ERIC Educational Resources Information Center

    Geldof, C. J. A.; van Wassenaer, A. G.; de Kieviet, J. F.; Kok, J. H.; Oosterlaan, J.

    2012-01-01

    A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual…

  20. Perception of the Auditory-Visual Illusion in Speech Perception by Children with Phonological Disorders

    ERIC Educational Resources Information Center

    Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis

    2008-01-01

    An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond…

  1. Mandarin Visual Speech Information

    ERIC Educational Resources Information Center

    Chen, Trevor H.

    2010-01-01

    While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…

  2. Monkeys and Humans Share a Common Computation for Face/Voice Integration

    PubMed Central

    Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.

    2011-01-01

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
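The "race" model rejected above is commonly tested with the race-model (Miller) inequality: under an independent race between unisensory channels, the audiovisual response-time CDF can never exceed the sum of the two unimodal CDFs. A hedged sketch of that test on illustrative data (names and data are our own; the superposition model itself, which sums modeled activity patterns, is not reproduced here):

```python
import numpy as np

def ecdf(samples, ts):
    """Empirical CDF of reaction-time samples evaluated at times ts."""
    samples = np.asarray(samples, dtype=float)
    return np.mean(samples[:, None] <= np.asarray(ts, dtype=float), axis=0)

def race_model_violation(rt_a, rt_v, rt_av, ts):
    """Miller's race-model inequality test: positive values mean the
    audiovisual RT distribution beats the bound min(1, F_A(t) + F_V(t)),
    ruling out an independent race between the two channels."""
    bound = np.minimum(ecdf(rt_a, ts) + ecdf(rt_v, ts), 1.0)
    return ecdf(rt_av, ts) - bound
```

Applied to the monkey and human detection data, a consistent violation of this bound is what motivates replacing the race account with linear summation of the visual and auditory responses.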

  3. Dementia

    MedlinePlus

... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...

  4. Face perception in women with Turner syndrome and its underlying factors.

    PubMed

    Anaki, David; Zadikov Mor, Tal; Gepstein, Vardit; Hochberg, Ze'ev

    2016-09-01

Turner syndrome (TS) is a chromosomal condition that affects development in females. It is characterized by short stature, ovarian failure and other congenital malformations, due to a partial or complete absence of the second sex chromosome. Women with TS frequently suffer from various physical and hormonal dysfunctions, along with impairments in visual-spatial processing and social cognition difficulties. Previous research has also shown difficulties in face and emotion perception. In the current study we examined two questions: First, whether women with TS, who are impaired in face perception, also suffer from deficits in face-specific processes. The second question was whether these face impairments in TS are related to visual-spatial perceptual dysfunctions exhibited by TS individuals, or to impaired social cognition skills. Twenty-six women with TS and 26 control participants were tested on various cognitive and psychological tests to assess visual-spatial perception, face and facial expression perception, and social cognition skills. Results show that women with TS were less accurate in face perception and facial expression processing, yet they exhibited normal face-specific processes (configural and holistic processing). They also showed difficulties in spatial perception and social cognition capacities. Additional analyses revealed that their face perception impairments were related to their deficits in visual-spatial processing. Thus, our results do not support the claim that the impairments in face processing observed in TS are related to difficulties in social cognition. Rather, our data point to the possibility that face perception difficulties in TS stem from visual-spatial impairments and may not be specific to faces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception

    PubMed Central

    Helfrich, Randolph F.; Huang, Melody; Wilson, Guy; Knight, Robert T.

    2017-01-01

    Conscious visual perception is proposed to arise from the selective synchronization of functionally specialized but widely distributed cortical areas. It has been suggested that different frequency bands index distinct canonical computations. Here, we probed visual perception on a fine-grained temporal scale to study the oscillatory dynamics supporting prefrontal-dependent sensory processing. We tested whether a predictive context that was embedded in a rapid visual stream modulated the perception of a subsequent near-threshold target. The rapid stream was presented either rhythmically at 10 Hz, to entrain parietooccipital alpha oscillations, or arrhythmically. We identified a 2- to 4-Hz delta signature that modulated posterior alpha activity and behavior during predictive trials. Importantly, delta-mediated top-down control diminished the behavioral effects of bottom-up alpha entrainment. Simultaneous source-reconstructed EEG and cross-frequency directionality analyses revealed that this delta activity originated from prefrontal areas and modulated posterior alpha power. Taken together, this study presents converging behavioral and electrophysiological evidence for frontal delta-mediated top-down control of posterior alpha activity, selectively facilitating visual perception. PMID:28808023

  6. A visualization system for CT based pulmonary fissure analysis

    NASA Astrophysics Data System (ADS)

    Pu, Jiantao; Zheng, Bin; Park, Sang Cheol

    2009-02-01

    In this study we describe a visualization system of pulmonary fissures depicted on CT images. The purpose is to provide clinicians with an intuitive perception of a patient's lung anatomy through an interactive examination of fissures, enhancing their understanding and accurate diagnosis of lung diseases. This system consists of four key components: (1) region-of-interest segmentation; (2) three-dimensional surface modeling; (3) fissure type classification; and (4) an interactive user interface, by which the extracted fissures are displayed flexibly in different space domains including image space, geometric space, and mixed space using simple toggling "on" and "off" operations. In this system, the different visualization modes allow users not only to examine the fissures themselves but also to analyze the relationship between fissures and their surrounding structures. In addition, the users can adjust thresholds interactively to visualize the fissure surface under different scanning and processing conditions. Such a visualization tool is expected to facilitate investigation of structures near the fissures and provide an efficient "visual aid" for other applications such as treatment planning and assessment of therapeutic efficacy as well as education of medical professionals.

  7. Enhanced visual performance in obsessive compulsive personality disorder.

    PubMed

    Ansari, Zohreh; Fadardi, Javad Salehi

    2016-12-01

    Vision is considered the commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were screened with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II); 18 of them (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification, while the controls were 20 persons (mean age = 27.85; SD = 5.26; 84% female) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task measuring two dimensions of visual performance (i.e., visual acuity: detecting the location, complexity, and size of a change; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but took longer to detect a change on pairs related to complexity and contrast. OCPD individuals thus appear to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  8. Visual-perceptual impairment in children with cerebral palsy: a systematic review.

    PubMed

    Ego, Anne; Lidzba, Karen; Brovedani, Paola; Belmonti, Vittorio; Gonzalez-Monge, Sibylle; Boudia, Baya; Ritz, Annie; Cans, Christine

    2015-04-01

    Visual perception is one of the cognitive functions often impaired in children with cerebral palsy (CP). The aim of this systematic literature review was to assess the frequency of visual-perceptual impairment (VPI) and its relationship with patient characteristics. Eligible studies were relevant papers assessing visual perception with five common standardized assessment instruments in children with CP published from January 1990 to August 2011. Of the 84 studies selected, 15 were retained. In children with CP, the proportion of VPI ranged from 40% to 50% and the mean visual perception quotient from 70 to 90. None of the studies reported a significant influence of CP subtype, IQ level, side of motor impairment, neuro-ophthalmological outcomes, or seizures. The severity of neuroradiological lesions seemed associated with VPI. The influence of prematurity was controversial, but a lower gestational age was more often associated with lower visual motor skills than with decreased visual-perceptual abilities. The impairment of visual perception in children with CP should be considered a core disorder within the CP syndrome. Further research, including a more systematic approach to neuropsychological testing, is needed to explore the specific impact of CP subgroups and of neuroradiological features on visual-perceptual development. © 2015 The Authors. Developmental Medicine & Child Neurology © 2015 Mac Keith Press.

  9. Perception of Visual Speed While Moving

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Gigone, Krista; Scott, Rebecca

    2005-01-01

    During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…

  10. The Impact of Visual Impairment on Perceived School Climate

    ERIC Educational Resources Information Center

    Schade, Benjamin; Larwin, Karen H.

    2015-01-01

    The current investigation examines whether visual impairment has an impact on a student's perception of the school climate. Using a large national sample of high school students, perceptions were examined for students with vision impairment relative to students with no visual impairments. Three factors were examined: self-reported level of…

  11. Impact of Language on Development of Auditory-Visual Speech Perception

    ERIC Educational Resources Information Center

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  12. Experience, Context, and the Visual Perception of Human Movement

    ERIC Educational Resources Information Center

    Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie

    2004-01-01

    Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…

  13. Parents' Perceptions of Physical Activity for Their Children with Visual Impairments

    ERIC Educational Resources Information Center

    Perkins, Kara; Columna, Luis; Lieberman, Lauren; Bailey, JoEllen

    2013-01-01

    Introduction: Ongoing communication with parents and the acknowledgment of their preferences and expectations are crucial to promote the participation of physical activity by children with visual impairments. Purpose: The study presented here explored parents' perceptions of physical activity for their children with visual impairments and explored…

  14. Stochastic Resonance In Visual Perception

    NASA Astrophysics Data System (ADS)

    Simonotto, Enrico

    1996-03-01

    Stochastic resonance (SR) is a well-established physical phenomenon wherein some measure of the coherence of a weak signal can be optimized by random fluctuations, or "noise" (K. Wiesenfeld and F. Moss, Nature 373, 33 (1995)). In all experiments to date the coherence has been measured using numerical analysis of the data, for example, signal-to-noise ratios obtained from power spectra. But can this analysis be replaced by a perceptive task? Previously we demonstrated this possibility with a numerical model of perceptual bistability applied to the interpretation of ambiguous figures (M. Riani and E. Simonotto, Phys. Rev. Lett. 72, 3120 (1994)). Here I describe an experiment wherein SR is detected in visual perception. A recognizable grayscale photograph was digitized and a threshold was applied to it: every pixel whose grayscale value exceeded the threshold was painted white, and all others black. For a large enough threshold the picture is unrecognizable, but the addition of a random number to every pixel renders it interpretable (C. Seife and M. Roberts, The Economist 336, 59, July 29 (1995)). Moreover, the addition of dynamical (time-varying) noise to the pixels greatly enhances an observer's ability to interpret the picture. Here I report the results of psychophysics experiments wherein the effects of both the intensity of the noise and its correlation time were studied.
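    The thresholding procedure described in this abstract is easy to reproduce. The sketch below is an illustrative reconstruction, not the original experimental code; the synthetic ramp image, threshold, and noise level are assumed values chosen to make the effect visible:

```python
import numpy as np

def dithered_threshold(image, threshold, noise_std, rng):
    """Binarize a grayscale image after adding zero-mean Gaussian noise:
    noisy pixels above the threshold become white (1), all others black (0)."""
    noisy = image + rng.normal(0.0, noise_std, size=image.shape)
    return (noisy > threshold).astype(np.uint8)

rng = np.random.default_rng(0)
# Synthetic stand-in for the photograph: a smooth luminance ramp in [0, 1].
image = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))

# A high threshold with no noise wipes out almost all structure ...
silent = dithered_threshold(image, threshold=0.9, noise_std=0.0, rng=rng)
# ... but moderate noise lets sub-threshold luminance differences through:
# brighter regions turn white more often, so the ramp reappears in the
# local density of white pixels. Regenerating the noise on every frame
# gives the dynamic-noise version of the experiment.
noisy = dithered_threshold(image, threshold=0.9, noise_std=0.3, rng=rng)
```

    In the no-noise image only the brightest strip survives; in the noisy image the probability that a pixel turns white grows with its underlying luminance, which is what carries the sub-threshold picture through.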

  15. Problems and solutions: accounts by parents and children of adhering to chest physiotherapy for cystic fibrosis.

    PubMed

    Williams, Brian; Mukhopadhyay, Somnath; Dowell, Jon; Coyle, Joanne

    2007-07-30

    Although chest physiotherapy is central to the management of cystic fibrosis (CF), adherence among children is problematic. This study explores accounts by parents and children of the difficulties of adhering to chest physiotherapy for cystic fibrosis, and identifies strategies used by families to overcome these. A qualitative study based on in-depth interviews with 32 children with a diagnosis of cystic fibrosis aged 7-17 years, and with 31 parents. Physiotherapy was frequently described as restrictive, threatening to identity and boring, giving rise to feelings of unfairness, inequality, 'difference', and social stigma. Motivation to adhere was influenced by perceptions of effectiveness that depended on external signs evident during or after the physiotherapy. Motivation was enhanced where parents and children visualized the accumulation of mucus. Some parents had developed distraction techniques that improved the experience of chest physiotherapy but had few opportunities to share these with other parents. The experience of physiotherapy is problematic to some parents and children. Furthermore, motivation to overcome these problems may be undermined by perceptions of ineffectiveness. Distraction techniques that change the value the child places on the time spent doing physiotherapy, and that reduce the child's perception of its duration, may improve experience and adherence. The potential of visualization techniques to promote adherence should be investigated further.

  16. Dopaminergic stimulation enhances confidence and accuracy in seeing rapidly presented words.

    PubMed

    Lou, Hans C; Skewes, Joshua C; Thomsen, Kristine Rømer; Overgaard, Morten; Lau, Hakwan C; Mouridsen, Kim; Roepstorff, Andreas

    2011-02-23

    Liberal acceptance, overconfidence, and increased activity of the neurotransmitter dopamine have been proposed to account for abnormal sensory experiences, for instance, hallucinations in schizophrenia. In normal subjects, increased sensory experience in Yoga Nidra meditation is linked to striatal dopamine release. We therefore hypothesize that the neurotransmitter dopamine may function as a regulator of subjective confidence of visual perception in the normal brain. Although much is known about the effect of stimulation by neurotransmitters on cognitive functions, their effect on subjective confidence of perception has never been recorded experimentally before. In a controlled study of 24 normal, healthy female university students with the dopamine agonist pergolide given orally, we show that dopaminergic activation increases confidence in seeing rapidly presented words. It also improves performance in a forced-choice word recognition task. These results demonstrate neurotransmitter regulation of subjective conscious experience of perception and provide evidence for a crucial role of dopamine.

  17. Attitudes towards and perceptions of visual loss and its causes among Hong Kong Chinese adults.

    PubMed

    Lau, Joseph Tak Fai; Lee, Vincent; Fan, Dorothy; Lau, Mason; Michon, John

    2004-06-01

    As part of a study of visual function among Hong Kong Chinese adults, their attitudes and perceptions related to visual loss were examined. These included fear of visual loss, negative functional impacts of visual loss, the relationship between ageing and visual loss and help-seeking behaviours related to visual loss. Demographic factors associated with these variables were also studied. The study population were people aged 40 and above randomly selected from the Shatin district of Hong Kong. The participants underwent eye examinations that included visual acuity, intraocular pressure measurement, visual field, slit-lamp biomicroscopy and ophthalmoscopy. The primary cause of visual disability was recorded. The participants were also asked about their attitudes and perceptions regarding visual loss using a structured questionnaire. The prevalence of bilateral visual disability was 2.2% among adults aged 40 or above and 6.4% among adults aged 60 or above. Nearly 36% of the participants selected blindness as the most feared disabling medical condition, which was substantially higher than conditions such as dementia, loss of limbs, deafness or aphasia. Inability to take care of oneself (21.0%), inconvenience related to mobility (20.2%) and inability to work (14.8%) were the three most commonly mentioned 'worst impact' effects of visual loss. Fully 68% of the participants believed that loss of vision is related to ageing. A majority of participants would seek help and advice from family members in case of visual loss. Visual function is perceived to be very important by Hong Kong Chinese adults. The fear of visual loss is widespread and particularly affects self-care and functional abilities. Visual loss is commonly seen as related to ageing. Attitudes and perceptions in this population may be modified by educational and outreach efforts in order to take advantage of preventive measures.

  18. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of V1 neurons underlying motion perception in the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference-of-Gaussian filter bank, which produces the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
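    The spatial index above rests on a gradient-similarity measure. The following is a minimal sketch of a generic Sobel-based gradient similarity in the spirit of GMS-type indices; the kernels and the stability constant `c` are assumptions for illustration, not the authors' exact parameters:

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude from 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def gradient_similarity_index(reference, distorted, c=0.01):
    """Mean pointwise gradient similarity in (0, 1]; 1 means identical
    gradient maps. c stabilizes the ratio for images scaled to [0, 1]."""
    g_r = sobel_gradient_magnitude(reference)
    g_d = sobel_gradient_magnitude(distorted)
    return np.mean((2 * g_r * g_d + c) / (g_r**2 + g_d**2 + c))
```

    An undistorted frame scores 1 exactly; blur or noise perturbs the gradient maps and pulls the score below 1.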

  19. Musical Experience and the Aging Auditory System: Implications for Cognitive Abilities and Hearing Speech in Noise

    PubMed Central

    Parbery-Clark, Alexandra; Strait, Dana L.; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-01-01

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18–30), we asked whether musical experience benefits an older cohort of musicians (ages 45–65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline. PMID:21589653

  1. Ventral and dorsal streams processing visual motion perception (FDG-PET study)

    PubMed Central

    2012-01-01

    Background Earlier functional imaging studies on visually induced self-motion perception (vection) disclosed a bilateral network of activations within primary and secondary visual cortex areas which was combined with signal decreases, i.e., deactivations, in multisensory vestibular cortex areas. This finding led to the concept of a reciprocal inhibitory interaction between the visual and vestibular systems. In order to define areas involved in special aspects of self-motion perception such as intensity and duration of the perceived circular vection (CV) or the amount of head tilt, correlation analyses of the regional cerebral glucose metabolism, rCGM (measured by fluorodeoxyglucose positron-emission tomography, FDG-PET) and these perceptual covariates were performed in 14 healthy volunteers. For analyses of the visual-vestibular interaction, the CV data were compared to a random dot motion stimulation condition (not inducing vection) and a control group at rest (no stimulation at all). Results Group subtraction analyses showed that the visual-vestibular interaction was modified during CV, i.e., the activations within the cerebellar vermis and parieto-occipital areas were enhanced. The correlation analysis between the rCGM and the intensity of visually induced vection, experienced as body tilt, showed a relationship for areas of the multisensory vestibular cortical network (inferior parietal lobule bilaterally, anterior cingulate gyrus), the medial parieto-occipital cortex, the frontal eye fields and the cerebellar vermis. The “earlier” multisensory vestibular areas like the parieto-insular vestibular cortex and the superior temporal gyrus did not appear in the latter analysis. The duration of perceived vection after stimulus stop was positively correlated with rCGM in medial temporal lobe areas bilaterally, which included the (para-)hippocampus, known to be involved in various aspects of memory processing. 
The amount of head tilt was found to be positively correlated with the rCGM of bilateral basal ganglia regions responsible for the control of motor function of the head. Conclusions Our data gave further insights into subfunctions within the complex cortical network involved in the processing of visual-vestibular interaction during CV. Specific areas of this cortical network could be attributed to the ventral stream (“what” pathway) responsible for the duration after stimulus stop and to the dorsal stream (“where/how” pathway) responsible for intensity aspects. PMID:22800430

  2. Different Signal Enhancement Pathways of Attention and Consciousness Underlie Perception in Humans.

    PubMed

    van Boxtel, Jeroen J A

    2017-06-14

    It is not yet known whether attention and consciousness operate through similar or largely different mechanisms. Visual processing mechanisms are routinely characterized by measuring contrast response functions (CRFs). In this report, behavioral CRFs were obtained in humans (both males and females) by measuring afterimage durations over the entire range of inducer stimulus contrasts to reveal the visual mechanisms behind attention and consciousness. Deviations relative to the standard CRF, i.e., gain functions, describe the strength of signal enhancement; these were assessed for changes due to both attentional task and conscious perception. It was found that attention displayed a response-gain function, whereas consciousness displayed a contrast-gain function. Model comparisons showed that both contrast-gain and response-gain effects can be explained with a two-level normalization model that includes only contrast-gain modulations, in which consciousness affects only the first level and attention affects only the second level. These results demonstrate that attention and consciousness can effectively show different gain functions because they operate through different signal enhancement mechanisms. SIGNIFICANCE STATEMENT The relationship between attention and consciousness is still debated. Mapping contrast response functions (CRFs) has allowed (neuro)scientists to gain important insights into the mechanistic underpinnings of visual processing. Here, the influence of both attention and consciousness on these functions was measured, and they displayed a strong dissociation. First, attention lowered CRFs, whereas consciousness raised them. Second, attention manifests itself as a response-gain function, whereas consciousness manifests itself as a contrast-gain function. Extensive model comparisons show that these results are best explained by a two-level normalization model in which consciousness affects only the first level, whereas attention affects only the second level. These findings show dissociations between both the computational mechanisms behind attention and consciousness and the perceptual consequences that they induce. Copyright © 2017 the authors 0270-6474/17/375912-11$15.00/0.
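    The contrast-gain versus response-gain distinction can be made concrete with the standard Naka-Rushton contrast response function. This is a generic textbook form with illustrative parameter values, not the authors' fitted two-level normalization model:

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0):
    """Standard contrast response function: R(c) = r_max * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

contrast = np.linspace(0.01, 1.0, 100)
baseline = naka_rushton(contrast)

# Contrast gain: the curve shifts leftward (the semisaturation contrast
# c50 drops), the profile the abstract attributes to consciousness.
contrast_gain = naka_rushton(contrast, c50=0.15)

# Response gain: the whole curve scales multiplicatively (r_max grows),
# the profile the abstract attributes to attention.
response_gain = naka_rushton(contrast, r_max=1.5)
```

    Plotting the three curves makes the dissociation visible: contrast gain boosts responses most at intermediate contrasts while saturating at the same ceiling, whereas response gain raises the ceiling itself.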

  3. Ambiguities and conventions in the perception of visual art.

    PubMed

    Mamassian, Pascal

    2008-09-01

    Visual perception is ambiguous, and the visual arts play with these ambiguities. While perceptual ambiguities are resolved with prior constraints, artistic ambiguities are resolved by conventions. Is there a relationship between priors and conventions? This review surveys recent work related to these ambiguities in composition, spatial scale, illumination and color, three-dimensional layout, shape, and movement. While most conventions seem to have their roots in perceptual constraints, those conventions that differ from priors may help us appreciate how the visual arts differ from everyday perception.

  4. Cortical visual dysfunction in children: a clinical study.

    PubMed

    Dutton, G; Ballantyne, J; Boyd, G; Bradnam, M; Day, R; McCulloch, D; Mackie, R; Phillips, S; Saunders, K

    1996-01-01

    Damage to the cerebral cortex was responsible for impairment in vision in 90 of 130 consecutive children referred to the Vision Assessment Clinic in Glasgow. Cortical blindness was seen in 16 children. Only 2 were mobile, but both showed evidence of navigational blind-sight. Cortical visual impairment, in which it was possible to estimate visual acuity but generalised severe brain damage precluded estimation of cognitive visual function, was observed in 9 children. Complex disorders of cognitive vision were seen in 20 children. These could be divided into five categories and involved impairment of: (1) recognition, (2) orientation, (3) depth perception, (4) perception of movement and (5) simultaneous perception. These disorders were observed in a variety of combinations. The remaining children showed evidence of reduced visual acuity and/or visual field loss, but without detectable disorders of cognitive visual function. Early recognition of disorders of cognitive vision is required if active training and remediation are to be implemented.

  5. Gestalt Perception and Local-Global Processing in High-Functioning Autism

    ERIC Educational Resources Information Center

    Bolte, Sven; Holtmann, Martin; Poustka, Fritz; Scheurich, Armin; Schmidt, Lutz

    2007-01-01

    This study examined gestalt perception in high-functioning autism (HFA) and its relation to tasks indicative of local visual processing. Data on gestalt perception, visual illusions (VI), hierarchical letters (HL), Block Design (BD) and the Embedded Figures Test (EFT) were collected in adult males with HFA, schizophrenia, depression and…

  6. Functional Dissociation between Perception and Action Is Evident Early in Life

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Avidan, Galia; Ganel, Tzvi

    2012-01-01

    The functional distinction between vision for perception and vision for action is well documented in the mature visual system. Ganel and colleagues recently provided direct evidence for this dissociation, showing that while visual processing for perception follows Weber's fundamental law of psychophysics, action violates this law. We tracked the…

  7. A Dynamic Systems Theory Model of Visual Perception Development

    ERIC Educational Resources Information Center

    Coté, Carol A.

    2015-01-01

    This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…

  8. Subliminal perception of complex visual stimuli.

    PubMed

    Ionescu, Mihai Radu

    2016-01-01

    Rationale: Unconscious perception in various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study assessed whether unconscious visual perception can occur with more complex visual stimuli than previously utilized. Methods and Results: Videos containing slideshows of indifferent complex images with interspersed frames of interest of various durations were presented to 24 healthy volunteers. Perception of the stimulus was evaluated with a forced-choice questionnaire, while awareness was quantified by self-assessment with a modified awareness scale annexed to each question, with 4 categories of awareness. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus presentations, significantly correct answers were coupled with a degree of conscious awareness. Discussion: At values of 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended, focusing on the range of stimulus durations between 50 and 16.66 ms.
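    Two details in this abstract invite quick arithmetic: 16.66 ms is exactly one refresh frame on a 60 Hz display, and "nonrandom answers" in a forced-choice task can be checked with a binomial tail probability. A minimal sketch follows; the trial counts are hypothetical illustrations, not the study's actual numbers:

```python
from math import comb

def binomial_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting k or more
    correct answers by guessing in an n-trial two-alternative task."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# On a 60 Hz display one refresh frame lasts 1000/60 ms:
frame_ms = 1000 / 60              # ~16.67 ms, the shortest stimulus duration
assert round(50 / frame_ms) == 3  # the 50 ms condition spans three frames

# Hypothetical example: 32 correct responses out of 48 forced-choice trials
# would be unlikely under pure guessing (one-tailed binomial test).
p_chance = binomial_p_at_least(32, 48)
```

    Performance significantly above the 50% guessing baseline, combined with self-reported unawareness, is what licenses the inference of unconscious perception at 50 ms.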

  9. The visual system prioritizes locations near corners of surfaces (not just locations near a corner).

    PubMed

    Bertamini, Marco; Helmy, Mai; Bates, Daniel

    2013-11-01

    When a new visual object appears, attention is directed toward it. However, some locations along the outline of the new object may receive more resources, perhaps as a consequence of their relative importance in describing its shape. Evidence suggests that corners receive enhanced processing, relative to the straight edges of an outline (corner enhancement effect). Using a technique similar to that in an original study in which observers had to respond to a probe presented near a contour (Cole et al. in Journal of Experimental Psychology: Human Perception and Performance 27:1356-1368, 2001), we confirmed this effect. When figure-ground relations were manipulated using shaded surfaces (Exps. 1 and 2) and stereograms (Exps. 3 and 4), two novel aspects of the phenomenon emerged: We found no difference between corners perceived as being convex or concave, and we found that the enhancement was stronger when the probe was perceived as being a feature of the surface that the corner belonged to. Therefore, the enhancement is not based on spatial aspects of the regions in the image, but critically depends on figure-ground stratification, supporting the link between the prioritization of corners and the representation of surface layout.

  10. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    PubMed

    Magnotti, John F; Beauchamp, Michael S

    2017-02-01

    Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
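    The causal-inference step can be illustrated with the standard Bayesian model this line of work builds on: a Körding-style formulation with Gaussian likelihoods and illustrative noise parameters, not the authors' exact CIMS implementation:

```python
from math import exp, pi, sqrt

def p_common_cause(x_a, x_v, sig_a=1.0, sig_v=1.0, sig_p=10.0,
                   mu_p=0.0, prior_common=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    came from a single source (Gaussian causal-inference model)."""
    # Likelihood of the cue pair under one common source (C = 1).
    var1 = sig_a**2 * sig_v**2 + sig_a**2 * sig_p**2 + sig_v**2 * sig_p**2
    like1 = exp(-((x_a - x_v)**2 * sig_p**2
                  + (x_a - mu_p)**2 * sig_v**2
                  + (x_v - mu_p)**2 * sig_a**2) / (2 * var1)) / (2 * pi * sqrt(var1))
    # Likelihood under two independent sources (C = 2).
    va, vv = sig_a**2 + sig_p**2, sig_v**2 + sig_p**2
    like2 = exp(-((x_a - mu_p)**2 / va
                  + (x_v - mu_p)**2 / vv) / 2) / (2 * pi * sqrt(va * vv))
    # Bayes' rule over the two causal structures.
    return like1 * prior_common / (like1 * prior_common + like2 * (1 - prior_common))

# Nearby cues favor integration; widely separated cues favor segregation.
congruent = p_common_cause(0.0, 0.2)
incongruent = p_common_cause(0.0, 6.0)
```

    When the inferred probability of a common cause is high the cues are integrated (as for McGurk stimuli); when it is low they are kept separate, which is why very similar but more discrepant syllable pairs fail to fuse.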

  11. Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.

    PubMed

    Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J

    2017-02-01

    Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, there is also great fluctuation of vividness within individuals, such that looking only at differences between people necessarily obscures the picture. In this study, we show that variation in moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery is similar to the neural response during perception, the more vivid or perception-like the imagery experience is. Copyright © 2017 the authors 0270-6474/17/371367-07$15.00/0.

  12. Brain functional network connectivity based on a visual task: visual information processing-related brain regions are significantly activated in the task state.

    PubMed

    Yang, Yan-Li; Deng, Hong-Xia; Xing, Gui-Yang; Xia, Xiao-Luan; Li, Hai-Fang

    2015-02-01

    It is not clear whether the methods used in research on functional brain networks can be applied to explore the feature-binding mechanism of visual perception. In this study, we investigated the binding of color and shape features in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task, and were used to construct brain networks for the resting and task states. Results showed that brain regions involved in visual information processing were significantly activated during the task. Network components were partitioned using a greedy algorithm, indicating that the visual network exists even in the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the results showed that the occipital and lingual gyri were stable regions in the visual system network, that the parietal lobe played a very important role in the binding of color and shape features, and that the fusiform and inferior temporal gyri were crucial for processing color and shape information. These findings indicate that understanding visual feature binding and the associated cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception.

  13. The role of temporo-parietal junction (TPJ) in global Gestalt perception.

    PubMed

    Huberle, Elisabeth; Karnath, Hans-Otto

    2012-07-01

    Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration is required not only for the cortical representation of individual objects, but is also essential for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametric degradation of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex, and precuneus. The TPJ location corresponds well with the areas typically lesioned in stroke patients with simultanagnosia following bilateral brain damage; these patients characteristically fail to identify the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation for the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.

  14. It feels like it’s me: interpersonal multisensory stimulation enhances visual remapping of touch from other to self

    PubMed Central

    Cardini, Flavia; Tajadura-Jiménez, Ana; Serino, Andrea; Tsakiris, Manos

    2013-01-01

    Understanding other people’s feelings in social interactions depends on the ability to map onto our body the sensory experiences we observed on other people’s bodies. It has been shown that the perception of tactile stimuli on the face is improved when concurrently viewing a face being touched. This Visual Remapping of Touch (VRT) is enhanced the more similar others are perceived to be to the self, and is strongest when viewing one’s face. Here, we ask whether altering self-other boundaries can in turn change the VRT effect. We used the enfacement illusion, which relies on synchronous interpersonal multisensory stimulation (IMS), to manipulate self-other boundaries. Following synchronous, but not asynchronous, IMS, the self-related enhancement of the VRT extended to the other individual. These findings suggest that shared multisensory experiences represent one key way to overcome the boundaries between self and others, as evidenced by changes in somatosensory processing of tactile stimuli on one’s own face when concurrently viewing another person’s face being touched. PMID:23276110

  15. The role of the right hemisphere in form perception and visual gnosis organization.

    PubMed

    Belyi, B I

    1988-06-01

    Peculiarities of picture-series interpretations and Rorschach test results in patients with unilateral benign hemispheric tumours are discussed. It is concluded that visual perception in the right hemisphere has a hierarchic structure, with each successive area from the occipital lobe toward the frontal lobe serving a more complex function. Visual engrams are distributed over the right hemisphere in a manner similar to the way visual information is recorded in holographic systems. In any impairment of the right hemisphere, a tendency toward whole but unclear vision arises. Preservation of the lower levels of visual perception provides clear vision of only small parts of the image. Thus confabulatory phenomena arise, which are specific to right-hemispheric lesions.

  16. Visual wetness perception based on image color statistics.

    PubMed

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
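The reported image manipulation can be sketched in a few lines. This is a hypothetical re-implementation, not the authors' code: the saturation gain and gamma value are illustrative stand-ins for the paper's "wetness enhancing transformation", and the hue-entropy measure is a simple binned Shannon entropy over the hue histogram:

```python
import colorsys
import math
from collections import Counter

def wet_transform(rgb, sat_gain=1.6, gamma=1.8):
    """Hypothetical sketch of a wetness-enhancing transformation: boost
    chromatic saturation and apply a tone curve (gamma > 1) that darkens
    the image. Parameter values are illustrative, not from the paper."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(1.0, s * sat_gain)   # overall enhancement of chromatic saturation
    v = v ** gamma               # luminance tone change that increases darkness
    return colorsys.hsv_to_rgb(h, s, v)

def hue_entropy(pixels, bins=12):
    """Shannon entropy of the binned hue histogram; larger for scenes with
    many colors, where the transformation was reported to be more effective."""
    counts = Counter(int(colorsys.rgb_to_hsv(*p)[0] * bins) % bins
                     for p in pixels)
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Applying `wet_transform` pixel-wise raises saturation and lowers value, the direction of change the psychophysical experiments associated with wetter-looking scenes; `hue_entropy` separates many-colored from few-colored images.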

  17. Working memory enhances visual perception: evidence from signal detection analysis.

    PubMed

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W

    2010-03-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
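The sensitivity index A' used here is the standard nonparametric signal detection statistic computed from hit and false-alarm rates; a minimal sketch of the standard formula (not code from the study):

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity A' (Pollack & Norman, 1964).

    0.5 corresponds to chance and 1.0 to perfect discrimination; unlike
    raw accuracy, A' separates perceptual sensitivity from response bias."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5                 # chance performance
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form when false alarms exceed hits
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

On valid-cue trials with more hits at the same false-alarm rate, A' rises, which is how the study indexes the perceptual enhancement from working memory matches.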

  18. Visual training paired with electrical stimulation of the basal forebrain improves orientation-selective visual acuity in the rat.

    PubMed

    Kang, Jun Il; Groleau, Marianne; Dotigny, Florence; Giguère, Hugo; Vaucher, Elvire

    2014-07-01

    The cholinergic afferents from the basal forebrain to the primary visual cortex play a key role in visual attention and cortical plasticity. These afferent fibers modulate acute and long-term responses of visual neurons to specific stimuli. The present study evaluates whether this cholinergic modulation of visual neurons results in cortical activity and visual perception changes. Awake adult rats were exposed repeatedly for 2 weeks to an orientation-specific grating with or without coupling this visual stimulation to an electrical stimulation of the basal forebrain. The visual acuity, as measured using a visual water maze before and after the exposure to the orientation-specific grating, was increased in the group of trained rats with simultaneous basal forebrain/visual stimulation. The increase in visual acuity was not observed when visual training or basal forebrain stimulation was performed separately or when cholinergic fibers were selectively lesioned prior to the visual stimulation. The visual evoked potentials show a long-lasting increase in cortical reactivity of the primary visual cortex after coupled visual/cholinergic stimulation, as well as c-Fos immunoreactivity of both pyramidal and GABAergic interneuron. These findings demonstrate that when coupled with visual training, the cholinergic system improves visual performance for the trained orientation probably through enhancement of attentional processes and cortical plasticity in V1 related to the ratio of excitatory/inhibitory inputs. This study opens the possibility of establishing efficient rehabilitation strategies for facilitating visual capacity.

  19. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as orientation, improves at the saccade landing point. Interestingly, there is also evidence that faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing extends to face recognition. In this study, three experiments mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to single out the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  20. Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?

    ERIC Educational Resources Information Center

    Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde

    2012-01-01

    In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…

  1. Commonalities between Perception and Cognition.

    PubMed

    Tacca, Michela C

    2011-01-01

    Perception and cognition are highly interrelated. Given the influence that these systems exert on one another, it is important to explain how perceptual representations and cognitive representations interact. In this paper, I analyze the similarities between visual perceptual representations and cognitive representations in terms of their structural properties and content. Specifically, I argue that the spatial structure underlying visual object representation displays systematicity - a property that is considered to be characteristic of propositional cognitive representations. To this end, I propose a logical characterization of visual feature binding as described by Treisman's Feature Integration Theory and argue that systematicity is not only a property of language-like representations, but also of spatially organized visual representations. Furthermore, I argue that if systematicity is taken to be a criterion to distinguish between conceptual and non-conceptual representations, then visual representations that display systematicity might count as an early type of conceptual representation. Showing these analogies between visual perception and cognition is an important step toward understanding the interface between the two systems. The ideas presented here might also set the stage for new empirical studies that directly compare binding (and other relational operations) in visual perception and higher cognition.

  2. Commonalities between Perception and Cognition

    PubMed Central

    Tacca, Michela C.

    2011-01-01

    Perception and cognition are highly interrelated. Given the influence that these systems exert on one another, it is important to explain how perceptual representations and cognitive representations interact. In this paper, I analyze the similarities between visual perceptual representations and cognitive representations in terms of their structural properties and content. Specifically, I argue that the spatial structure underlying visual object representation displays systematicity – a property that is considered to be characteristic of propositional cognitive representations. To this end, I propose a logical characterization of visual feature binding as described by Treisman’s Feature Integration Theory and argue that systematicity is not only a property of language-like representations, but also of spatially organized visual representations. Furthermore, I argue that if systematicity is taken to be a criterion to distinguish between conceptual and non-conceptual representations, then visual representations that display systematicity might count as an early type of conceptual representation. Showing these analogies between visual perception and cognition is an important step toward understanding the interface between the two systems. The ideas presented here might also set the stage for new empirical studies that directly compare binding (and other relational operations) in visual perception and higher cognition. PMID:22144974

  3. The effect of neurofeedback on a brain wave and visual perception in stroke: a randomized control trial.

    PubMed

    Cho, Hwi-Young; Kim, Kitae; Lee, Byounghee; Jung, Jinhwa

    2015-03-01

    [Purpose] This study investigated brain wave and visual perception changes in stroke subjects using neurofeedback (NFB) training. [Subjects] Twenty-seven stroke subjects were randomly allocated to the NFB group (n = 13) and the control (CON) group (n = 14). [Methods] Two expert therapists provided both groups with traditional rehabilitation therapy in 30 thirty-minute sessions over the course of 6 weeks. NFB training was provided only to the NFB group; the CON group received traditional rehabilitation therapy only. Before and after the 6-week intervention, a brain wave test and the motor-free visual perception test (MVPT) were performed. [Results] Both groups showed significant differences in their relative beta wave values and attention concentration quotients. Moreover, the NFB group showed significant differences in MVPT visual discrimination, form constancy, visual memory, visual closure, spatial relations, raw score, and processing time. [Conclusion] This study demonstrated that NFB training is more effective than traditional rehabilitation for increasing concentration and changing visual perception. Further studies should investigate these effects in more detail, considering the number and characteristics of subjects and the length of the NFB training period.

  4. Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models

    PubMed Central

    2016-01-01

    Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance perception are tested against several multisensory models, including a modified causal inference model that also predicts the distributions of the estimates. In our study, the audiovisual perception of distance was better explained overall by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
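Inside such models, the forced-integration branch combines the cues weighted by their reliabilities, and a sensory weight like the "more than 80%" visual contribution quoted above corresponds directly to this quantity. A minimal sketch of reliability-weighted (maximum-likelihood) fusion, with illustrative noise parameters:

```python
def fuse(x_v, x_a, sigma_v, sigma_a):
    """Reliability-weighted cue fusion: the mandatory-integration limit of
    causal inference models. Each cue is weighted by its inverse variance,
    so the more reliable (here, visual) estimate dominates the percept.
    Returns the fused estimate and the visual weight."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    return w_v * x_v + (1 - w_v) * x_a, w_v
```

For example, a visual cue twice as precise as the auditory cue (sigma_v = 0.5 vs. sigma_a = 1.0) receives a weight of 0.8, i.e., an 80% contribution to the fused distance estimate.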

  5. Bilateral Theta-Burst TMS to Influence Global Gestalt Perception

    PubMed Central

    Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto

    2012-01-01

    While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects – a deficit termed simultanagnosia – greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally and simultaneously over the TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametrical degrading of the object at the global level. Identification of the global Gestalt was significantly modulated only for the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves TPJ and is co-dependent on both hemispheres. PMID:23110106

  6. Bilateral theta-burst TMS to influence global gestalt perception.

    PubMed

    Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto

    2012-01-01

    While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects - a deficit termed simultanagnosia - greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally and simultaneously over the TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametrical degrading of the object at the global level. Identification of the global Gestalt was significantly modulated only for the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves TPJ and is co-dependent on both hemispheres.

  7. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development.

    PubMed

    Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin

    2017-06-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. Copyright © 2017 Elsevier Ltd. All rights reserved.
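The record does not describe the exact psychophysical procedure, but motion coherence thresholds (MCTs) are commonly estimated with an adaptive staircase. A generic 2-down/1-up sketch, with all parameter values illustrative:

```python
def two_down_one_up(responses, start=0.8, step=0.05, floor=0.0, ceil=1.0):
    """Generic 2-down/1-up staircase for estimating a motion coherence
    threshold (converges near the 70.7%-correct point). `responses` is a
    sequence of booleans (correct/incorrect); returns the coherence level
    tested on each trial. A sketch only, not the study's procedure."""
    levels, coherence, streak = [], start, 0
    for correct in responses:
        levels.append(coherence)
        if correct:
            streak += 1
            if streak == 2:                        # two in a row: harder
                coherence = max(floor, coherence - step)
                streak = 0
        else:                                      # any miss: easier
            coherence = min(ceil, coherence + step)
            streak = 0
    return levels
```

The threshold is then typically taken as the mean coherence at the staircase's last several reversal points; lower MCTs indicate better global motion perception.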

  8. Developing effective serious games: the effect of background sound on visual fidelity perception with varying texture resolution.

    PubMed

    Rojas, David; Kapralos, Bill; Cristancho, Sayra; Collins, Karen; Hogue, Andrew; Conati, Cristina; Dubrowski, Adam

    2012-01-01

    Despite the benefits associated with virtual learning environments and serious games, there are open, fundamental issues regarding simulation fidelity and multi-modal cue interaction and their effect on immersion, transfer of knowledge, and retention. Here we describe the results of a study that examined the effect of ambient (background) sound on the perception of visual fidelity (defined with respect to texture resolution). Results suggest that the perception of visual fidelity depends on ambient sound; more specifically, white noise can have detrimental effects on our perception of high-quality visuals. The results of this study will guide future work aimed at understanding the roles that fidelity and multi-modal interactions play in knowledge transfer and retention for users of virtual simulations and serious games.

  9. [Visual perception abilities in children with reading disabilities].

    PubMed

    Werpup-Stüwe, Lina; Petermann, Franz

    2015-05-01

    Visual perceptual abilities are increasingly being neglected in research on reading disabilities. This study measured the visual perceptual abilities of children with reading disabilities. The visual perceptual abilities of 35 children with specific reading disorder and 30 controls were compared using the German version of the Developmental Test of Visual Perception – Adolescent and Adult (DTVP-A). Eleven percent of the children with specific reading disorder showed clinically relevant performance on the DTVP-A. The perceptual abilities of the two groups differed significantly. No significant group differences remained after controlling for general IQ or the Perceptual Reasoning Index, but they did remain after controlling for the Verbal Comprehension, Working Memory, and Processing Speed Indices. The number of children with reading difficulties who also suffer from visual perceptual disorders has been underestimated. For this reason, visual perceptual abilities should always be tested when making a reading disorder diagnosis, and IQ-test profiles of children suffering from both reading and visual perceptual disorders should be interpreted carefully.

  10. Visual Motion Perception and Visual Attentive Processes.

    DTIC Science & Technology

    1988-04-01

    88-0551. Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364... Sperling. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. (HIPS is the Human Information Processing Laboratory's Image Processing System.) 1985: van Santen, Jan P. H., and George Sperling. Elaborated Reichardt detectors. Journal of the Optical...

  11. Compensatory shifts in visual perception are associated with hallucinations in Lewy body disorders.

    PubMed

    Bowman, Alan Robert; Bruce, Vicki; Colbourn, Christopher J; Collerton, Daniel

    2017-01-01

    Visual hallucinations are a common, distressing, and disabling symptom of Lewy body and other diseases. Current models suggest that interactions in internal cognitive processes generate hallucinations. However, these neglect external factors. Pareidolic illusions are an experimental analogue of hallucinations. They are easily induced in Lewy body disease, have similar content to spontaneous hallucinations, and respond to cholinesterase inhibitors in the same way. We used a primed pareidolia task with hallucinating participants with Lewy body disorders (n = 16), non-hallucinating participants with Lewy body disorders (n = 19), and healthy controls (n = 20). Participants were presented with visual "noise" that sometimes contained degraded visual objects and were required to indicate what they saw. Some perceptions were cued in advance by a visual prime. Results showed that hallucinating participants were impaired in discerning visual signals from noise, with a relaxed criterion threshold for perception compared to both other groups. After the presentation of a visual prime, the criterion was comparable to the other groups. The results suggest that participants with hallucinations compensate for perceptual deficits by relaxing perceptual criteria, at a cost of seeing things that are not there, and that visual cues regularize perception. This latter finding may provide a mechanism for understanding the interaction between environments and hallucinations.
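The "relaxed criterion with impaired discrimination" pattern maps onto standard Gaussian signal detection theory, in which sensitivity (d') and decision criterion (c) are computed separately from hit and false-alarm rates. A minimal sketch using the standard formulas (not the study's analysis code):

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Gaussian signal detection: sensitivity d' and criterion c.

    c < 0 indicates a liberal (relaxed) criterion: more 'yes' responses and
    hence more false perceptions, as reported for the hallucinating group;
    c > 0 indicates a conservative criterion. Rates must lie in (0, 1)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

For instance, a high false-alarm rate at a matched hit rate lowers c without necessarily changing d', which is how relaxed perceptual criteria can be dissociated from reduced sensitivity.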

  12. Visual speech perception in foveal and extrafoveal vision: further implications for divisions in hemispheric projections.

    PubMed

    Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B

    2014-01-01

    When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.

  13. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    PubMed

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials: [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  14. Temporal dynamics of figure-ground segregation in human vision.

    PubMed

    Neri, Peter; Levi, Dennis M

    2007-01-01

    The segregation of figure from ground is arguably one of the most fundamental operations in human vision. Neural signals reflecting this operation appear in cortex as early as 50 ms and as late as 300 ms after presentation of a visual stimulus, but it is not known when these signals are used by the brain to construct the percepts of figure and ground. We used psychophysical reverse correlation to identify the temporal window for figure-ground signals in human perception and found it to lie within the range of 100-160 ms. Figure enhancement within this narrow temporal window was transient rather than sustained as may be expected from measurements in single neurons. These psychophysical results prompt and guide further electrophysiological studies.

  15. On the role of crossmodal prediction in audiovisual emotion perception.

    PubMed

    Jessen, Sarah; Kotz, Sonja A

    2013-01-01

    Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect in multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information typically precedes auditory information, and this visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable prediction of auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of emotional, but not non-emotional, visual information. If future studies corroborate the assumption that emotional content allows more reliable prediction, cross-modal prediction will prove a crucial factor in our understanding of multisensory emotion perception.

  16. The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games

    NASA Astrophysics Data System (ADS)

    Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill

    2013-03-01

    Stereoscopic 3D (S3D) content in games, film, and other audio-visual media has been steadily increasing over the past several years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.

  17. Electrocortical amplification for emotionally arousing natural scenes: The contribution of luminance and chromatic visual channels

    PubMed Central

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas

    2015-01-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance-based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949

  18. Electrocortical amplification for emotionally arousing natural scenes: the contribution of luminance and chromatic visual channels.

    PubMed

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas

    2015-03-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance-based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Perceptions of Chemistry: Why Is the Common Perception of Chemistry, the Most Visual of Sciences, So Distorted?

    ERIC Educational Resources Information Center

    Habraken, Clarisse L.

    1996-01-01

    Highlights the need to reinvigorate chemistry education by means of the visual-spatial approach, an approach wholly in conformance with the way modern chemistry is thought about and practiced. Discusses the changing world, multiple intelligences, imagery, chemistry's pictorial language, and perceptions in chemistry. Presents suggestions on how to…

  20. To See or Not to See: Analyzing Difficulties in Geometry from the Perspective of Visual Perception

    ERIC Educational Resources Information Center

    Gal, Hagar; Linchevski, Liora

    2010-01-01

    In this paper, we consider theories about processes of visual perception and perception-based knowledge representation (VPR) in order to explain difficulties encountered in figural processing in junior high school geometry tasks. In order to analyze such difficulties, we take advantage of the following perspectives of VPR: (1) Perceptual…

  1. Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location

    PubMed Central

    Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene

    2017-01-01

    Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005

  2. Audio aided electro-tactile perception training for finger posture biofeedback.

    PubMed

    Vargas, Jose Gonzalez; Yu, Wenwei

    2008-01-01

    Visual information is one of the prerequisites for most biofeedback studies. The aim of this study is to explore how audio aided training helps in the learning of dynamical electro-tactile perception without any visual feedback. In this research, the electrical stimulation patterns associated with the experimenter's finger postures and motions were presented to the subjects. Along with the electrical stimulation patterns, two different types of information on finger postures and motions, verbal and audio, were presented to the verbal training subject group (group 1) and the audio training subject group (group 2), respectively. The results showed an improvement in the ability to distinguish and memorize electrical stimulation patterns corresponding to finger postures and motions without visual feedback; with the aid of audio tones, learning was faster and perception became more precise after training. Thus, this study clarified that, as a substitute for visual presentation, auditory information can effectively aid the formation of electro-tactile perception. Further research is needed to clarify the differences between visually guided and audio aided training in terms of information compilation, post-training effect, and robustness of the perception.

  3. The Use of Animated Videos to Illustrate Oral Solid Dosage Form Manufacturing in a Pharmaceutics Course.

    PubMed

    Yellepeddi, Venkata Kashyap; Roberson, Charles

    2016-10-25

    Objective. To evaluate the impact of animated videos of oral solid dosage form manufacturing as visual instructional aids on pharmacy students' perception and learning. Design. Data were obtained using a validated, paper-based survey instrument designed to evaluate the effectiveness, appeal, and efficiency of the animated videos in a pharmaceutics course offered in spring 2014 and 2015. Basic demographic data were also collected and analyzed. Assessment data from the end of the pharmaceutics course were collected for 2013 and compared with assessment data from 2014 and 2015. Assessment. Seventy-six percent of the respondents supported the idea of incorporating animated videos as instructional aids for teaching pharmaceutics. Students' performance on the formative assessment in 2014 and 2015 improved significantly compared to the performance of students in 2013 whose lectures did not include animated videos as instructional aids. Conclusions. Implementing animated videos of oral solid dosage form manufacturing as instructional aids resulted in improved student learning and favorable student perceptions about the instructional approach. Therefore, use of animated videos can be incorporated in pharmaceutics teaching to enhance visual learning.

  4. An Intraocular Camera for Retinal Prostheses: Restoring Sight to the Blind

    NASA Astrophysics Data System (ADS)

    Stiles, Noelle R. B.; McIntosh, Benjamin P.; Nasiatka, Patrick J.; Hauer, Michelle C.; Weiland, James D.; Humayun, Mark S.; Tanguay, Armand R., Jr.

    Implantation of an intraocular retinal prosthesis represents one possible approach to the restoration of sight in those with minimal light perception due to photoreceptor degenerating diseases such as retinitis pigmentosa and age-related macular degeneration. In such an intraocular retinal prosthesis, a microstimulator array attached to the retina is used to electrically stimulate still-viable retinal ganglion cells that transmit retinotopic image information to the visual cortex by means of the optic nerve, thereby creating an image percept. We describe herein an intraocular camera that is designed to be implanted in the crystalline lens sac and connected to the microstimulator array. Replacement of an extraocular (head-mounted) camera with the intraocular camera restores the natural coupling of head and eye motion associated with foveation, thereby enhancing visual acquisition, navigation, and mobility tasks. This research is in no small part inspired by the unique scientific style and research methodologies that many of us have learned from Prof. Richard K. Chang of Yale University, and is included herein as an example of the extent and breadth of his impact and legacy.

  5. Ecological validity of neuropsychological assessment and perceived employability.

    PubMed

    Wen, Johnny H; Boone, Kyle; Kim, Kevin

    2006-11-01

    Ecological validity studies that have examined the relationship between cognitive abilities and employment in psychiatric and medical populations have found that a wide range of cognitive domains predict employability, although memory and executive skills appear to be the most important. However, no information is available regarding a patient's self-perceived work attributes and objective neuropsychological performance, and whether the same cognitive domains associated with successful employment are also related to a patient's self-perception of work competence. In the present study, 73 medical and psychiatric patients underwent comprehensive neuropsychological assessment. Step-wise multiple regression analyses revealed that the visual-spatial domain was the only significant predictor of self-perceived work attributes and work competence as measured by the Working Inventory (WI) and the Work Adjustment Inventory (WAI), accounting for 7% to 10% of inventory score variability. The results raise the intriguing possibility that targeting of visual spatial skills for remediation and development might play a separate and unique role in the vocational rehabilitation of a lower SES population, specifically, by leading to enhanced self-perception of work competence as these individuals attempt to enter the job market.

  6. The Use of Animated Videos to Illustrate Oral Solid Dosage Form Manufacturing in a Pharmaceutics Course

    PubMed Central

    Yellepeddi, Venkata Kashyap; Roberson, Charles

    2016-01-01

    Objective. To evaluate the impact of animated videos of oral solid dosage form manufacturing as visual instructional aids on pharmacy students’ perception and learning. Design. Data were obtained using a validated, paper-based survey instrument designed to evaluate the effectiveness, appeal, and efficiency of the animated videos in a pharmaceutics course offered in spring 2014 and 2015. Basic demographic data were also collected and analyzed. Assessment data from the end of the pharmaceutics course were collected for 2013 and compared with assessment data from 2014 and 2015. Assessment. Seventy-six percent of the respondents supported the idea of incorporating animated videos as instructional aids for teaching pharmaceutics. Students’ performance on the formative assessment in 2014 and 2015 improved significantly compared to the performance of students in 2013 whose lectures did not include animated videos as instructional aids. Conclusions. Implementing animated videos of oral solid dosage form manufacturing as instructional aids resulted in improved student learning and favorable student perceptions about the instructional approach. Therefore, use of animated videos can be incorporated in pharmaceutics teaching to enhance visual learning. PMID:27899837

  7. Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers.

    PubMed

    Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia

    2018-01-17

    During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Association of auditory-verbal and visual hallucinations with impaired and improved recognition of colored pictures.

    PubMed

    Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana

    2015-09-01

    A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (c) 2015 APA, all rights reserved.

  9. Is improved contrast sensitivity a natural consequence of visual training?

    PubMed Central

    Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.

    2015-01-01

    Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736

  10. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the subjects' accuracy and reaction times (RTs). We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
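    One plausible way to quantify the periodicity of such pulse trains is the coefficient of variation (CV) of the inter-pulse intervals; this is an illustrative metric, not necessarily the measure used in the study. A perfectly periodic train has CV = 0, while randomly scattered pulses yield a CV near 1:

```python
import random
import statistics

random.seed(0)

def inter_pulse_cv(pulse_times):
    # Coefficient of variation of the intervals between successive pulses
    # (an assumed periodicity metric, for illustration only).
    intervals = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Periodic train: a pulse every 100 ms across one second.
periodic = [i * 100 for i in range(11)]

# Aperiodic train: the same number of pulses scattered over the same span.
aperiodic = sorted(random.uniform(0, 1000) for _ in range(11))

print(inter_pulse_cv(periodic))  # 0.0
print(inter_pulse_cv(aperiodic) > 0.2)  # True: scattered pulses are far less regular
```

Correlating such a per-stimulus index with discrimination accuracy and reaction times would mirror the analysis strategy the abstract describes.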

  11. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    PubMed

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  12. An fMRI-study of locally oriented perception in autism: altered early visual processing of the block design test.

    PubMed

    Bölte, S; Hubl, D; Dierks, T; Holtmann, M; Poustka, F

    2008-01-01

    Autism has been associated with enhanced local processing on visual tasks. Originally, this was based on findings that individuals with autism exhibited peak performance on the block design test (BDT) from the Wechsler Intelligence Scales. In autism, the neurofunctional correlates of local bias on this test have not yet been established, although there is evidence of alterations in the early visual cortex. Functional MRI was used to analyze hemodynamic responses in the striate and extrastriate visual cortex during BDT performance and a color counting control task in subjects with autism compared to healthy controls. In autism, BDT processing was accompanied by low blood oxygenation level-dependent signal changes in the right ventral quadrant of V2. Findings indicate that, in autism, locally oriented processing of the BDT is associated with altered responses of angle and grating-selective neurons, that contribute to shape representation, figure-ground, and gestalt organization. The findings favor a low-level explanation of BDT performance in autism.

  13. Implications on visual apperception: energy, duration, structure and synchronization.

    PubMed

    Bókkon, I; Vimal, Ram Lakhan Pandey

    2010-07-01

    Although primary visual cortex (V1 or striate) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition), the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/MT/MST, IT, and GF). In the absence of V1, visual signals can still reach several extrastriate areas but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations must also meet essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neurons in neocortex (number of neurons per degree of visual angle) devoted to representing the visual field is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in the mitochondrial cytochrome oxidase-rich areas of V1. Thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perception also demands structural conditions, an adequate duration of information representation, and synchronized neural processes and/or 'interactive hierarchical structuralism.' For visual apperception, various visual areas are involved depending on stimulus characteristics such as color, form/shape, motion, and other features. Here, we focus primarily on V1, where specific mitochondrial-rich retinotopic structures are found; we also concisely discuss V2, where these structures are sparser. We further point out that residual brain states are not fully reflected in active neural patterns after visual perception. Namely, after visual perception, subliminal residual states are not captured by passive neural recording techniques but require active stimulation to be revealed.

  14. Optimization of LED light spectrum to enhance colorfulness of illuminated objects with white light constraints.

    PubMed

    Wu, Haining; Dong, Jianfei; Qi, Gaojin; Zhang, Guoqi

    2015-07-01

    Enhancing the colorfulness of illuminated objects is a promising application of LED lighting for commercial, exhibiting, and scientific purposes. This paper proposes a method to enhance the color of illuminated objects for a given polychromatic lamp. Meanwhile, the light color is restricted to white. We further relax the white light constraints by introducing soft margins. Based on the spectral and electrical characteristics of LEDs and object surface properties, we determine the optimal mixing of the LED light spectrum by solving a numerical optimization problem, which is a quadratic fractional programming problem by formulation. Simulation studies show that the trade-off between the white light constraint and the level of the color enhancement can be adjusted by tuning an upper limit value of the soft margin. Furthermore, visual evaluation experiments are performed to evaluate human perception of the color enhancement. The experiments have verified the effectiveness of the proposed method.
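    The structure of the trade-off described above, maximizing a colorfulness objective while keeping the mixed light within a soft white constraint, can be illustrated with a toy grid search. The channel profiles, "colorfulness gain" scores, and margin values below are invented for illustration; the paper itself formulates the problem as quadratic fractional programming, which this sketch does not reproduce.

```python
import itertools

# Each assumed LED channel: a 3-band (R, G, B) power profile summing to 1,
# plus an invented per-watt "colorfulness gain" score.
CHANNELS = {
    "red":   ((0.90, 0.05, 0.05), 1.4),
    "green": ((0.05, 0.90, 0.05), 1.0),
    "blue":  ((0.05, 0.05, 0.90), 0.8),
    "white": ((0.33, 0.34, 0.33), 0.5),
}

def mix(weights):
    bands = [sum(w * CHANNELS[c][0][i] for c, w in weights.items())
             for i in range(3)]
    total = sum(bands)
    chroma = [b / total for b in bands]  # normalized band shares
    gain = sum(w * CHANNELS[c][1] for c, w in weights.items())
    # White-light constraint with a soft margin: each band share may
    # deviate from 1/3 by at most `margin` (checked by the caller).
    deviation = max(abs(c - 1 / 3) for c in chroma)
    return gain, deviation

def best_mix(margin, step=0.1):
    grid = [round(step * k, 2) for k in range(int(1 / step) + 1)]
    best, best_gain = None, -1.0
    for r, g, b in itertools.product(grid, repeat=3):
        w = 1.0 - r - g - b  # weights sum to 1; remainder goes to white
        if w < -1e-9:
            continue
        weights = {"red": r, "green": g, "blue": b, "white": max(w, 0.0)}
        gain, dev = mix(weights)
        if dev <= margin and gain > best_gain:
            best, best_gain = weights, gain
    return best, best_gain

tight = best_mix(margin=0.02)[1]
loose = best_mix(margin=0.10)[1]
print(tight < loose)  # True: relaxing the white constraint admits a more colorful mix
```

Tuning the margin reproduces the trade-off the abstract reports: a looser white constraint admits mixes with a higher colorfulness objective, at the cost of a less neutral white point.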

  15. Public health nurse perceptions of Omaha System data visualization.

    PubMed

    Lee, Seonah; Kim, Era; Monsen, Karen A

    2015-10-01

    Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. The aims were to visualize PHN-generated Omaha System data and to assess PHN perceptions regarding the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. Time-oriented visualization for problems and outcomes and Matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated the prototype visualizations. Overall, PHN response to the visualizations was positive, and feedback for improvement was provided. This study demonstrated the potential for using visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and assess the potential to incorporate visualizations within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Top-Down and Bottom-Up Visual Information Processing of Non-Social Stimuli in High-Functioning Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Maekawa, Toshihiko; Tobimatsu, Shozo; Inada, Naoko; Oribe, Naoya; Onitsuka, Toshiaki; Kanba, Shigenobu; Kamio, Yoko

    2011-01-01

    Individuals with high-functioning autism spectrum disorder (HF-ASD) often show superior performance in simple visual tasks, despite difficulties in the perception of socially important information such as facial expression. The neural basis of visual perception abnormalities associated with HF-ASD is currently unclear. We sought to elucidate the…

  17. Biometric Research in Perception and Neurology Related to the Study of Visual Communication.

    ERIC Educational Resources Information Center

    Metallinos, Nikos

    Contemporary research findings in the fields of perceptual psychology and neurology of the human brain that are directly related to the study of visual communication are reviewed and briefly discussed in this paper. Specifically, the paper identifies those major research findings in visual perception that are relevant to the study of visual…

  18. Perceptual flexibility is coupled with reduced executive inhibition in students of the visual arts.

    PubMed

    Chamberlain, Rebecca; Swinnen, Lena; Heeren, Sarah; Wagemans, Johan

    2018-05-01

    Artists often report that seeing familiar stimuli in novel and interesting ways plays a role in visual art creation. However, the attentional mechanisms which underpin this ability have yet to be fully investigated. More specifically, it is unclear whether the ability to reinterpret visual stimuli in novel and interesting ways is facilitated by endogenously generated switches of attention, and whether it is linked in turn to executive functions such as inhibition and response switching. To address this issue, the current study explored ambiguous figure reversal and executive function in a sample of undergraduate students studying arts and non-art subjects (N = 141). Art students showed more frequent perceptual reversals in an ambiguous figure task, both when viewing the stimulus passively and when eliciting perceptual reversals voluntarily, but showed no difference from non-art students when asked to actively maintain specific percepts. In addition, art students were worse than non-art students at inhibiting distracting flankers in an executive inhibition task. The findings suggest that art students can elicit endogenous shifts of attention more easily than non-art students but that this faculty is not directly associated with enhanced executive function. It is proposed that the signature of artistic skill may be increased perceptual flexibility accompanied by reduced cognitive inhibition; however, future research will be necessary to determine which particular subskills in the visual arts are linked to aspects of perception and executive function. © 2017 The British Psychological Society.

  19. Neural time course of visually enhanced echo suppression.

    PubMed

    Bishop, Christopher W; London, Sam; Miller, Lee M

    2012-10-01

    Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.

  20. The Coordination Dynamics of Observational Learning: Relative Motion Direction and Relative Phase as Informational Content Linking Action-Perception to Action-Production.

    PubMed

    Buchanan, John J

    2016-01-01

    The primary goal of this chapter is to merge together the visual perception perspective of observational learning and the coordination dynamics theory of pattern formation in perception and action. Emphasis is placed on identifying movement features that constrain and inform action-perception and action-production processes. Two sources of visual information are examined, relative motion direction and relative phase. The visual perception perspective states that the topological features of relative motion between limbs and joints remain invariant across an actor's motion and therefore are available for pickup by an observer. Relative phase has been put forth as an informational variable that links perception to action within the coordination dynamics theory. A primary assumption of the coordination dynamics approach is that environmental information is meaningful only in terms of the behavior it modifies. Across a series of single limb tasks and bimanual tasks it is shown that the relative motion and relative phase between limbs and joints are picked up through visual processes and support observational learning of motor skills. Moreover, internal estimations of motor skill proficiency and competency are linked to the informational content found in relative motion and relative phase. Thus, the chapter links action to perception and vice versa and also links cognitive evaluations to the coordination dynamics that support action-perception and action-production processes.

  1. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
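    As an illustration of the stimulus-response correlation idea described above, the sketch below recovers a known 1-D temporal filter from a white-noise stimulus by reverse correlation. The filter shape, stimulus, and all parameters are invented for illustration; the study itself estimates a 3-D space-time filter from pursuit eye movements.

```python
import random

random.seed(1)

# Invented ground-truth temporal filter (illustrative only).
true_filter = [0.0, 0.3, 1.0, 0.6, 0.2]

# White-noise "motion" stimulus.
n = 20000
stim = [random.gauss(0.0, 1.0) for _ in range(n)]

# Simulated response: the stimulus convolved with the filter.
taps = len(true_filter)
resp = [sum(true_filter[k] * stim[t - k] for k in range(taps))
        for t in range(taps, n)]

# Reverse correlation: for white noise,
# mean(resp[t] * stim[t - k]) ~ var(stim) * filter[k].
var_s = sum(s * s for s in stim) / n
est = []
for k in range(taps):
    c = sum(resp[i] * stim[i + taps - k] for i in range(len(resp))) / len(resp)
    est.append(c / var_s)

print([round(e, 2) for e in est])  # close to true_filter
```

    With a Gaussian white-noise stimulus the cross-correlation estimate converges to the filter at a rate of roughly 1/sqrt(n), which is why a long stimulus is used.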

  2. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self-motion perception.

  3. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    PubMed Central

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
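    The point of subjective simultaneity used above can be estimated from TOJ data roughly as follows. The SOA values and response proportions here are invented, and real analyses typically fit a full psychometric function rather than interpolating between points.

```python
# Invented SOAs (ms; negative = auditory lead) and the proportion of
# "visual first" responses observed at each SOA.
soas = [-200, -100, -50, 0, 50, 100, 200]
p_visual_first = [0.97, 0.88, 0.70, 0.55, 0.35, 0.15, 0.04]

def pss(soas, props, level=0.5):
    """Point of subjective simultaneity: the SOA where the response
    curve crosses the 50% level, found by linear interpolation."""
    pairs = list(zip(soas, props))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if (p0 - level) * (p1 - level) <= 0:  # level crossed in this segment
            return x0 + (level - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("50% level never crossed")

print(pss(soas, p_visual_first))  # 12.5
```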

  4. Hearing faces: how the infant brain matches the face it sees with the speech it hears.

    PubMed

    Bristow, Davina; Dehaene-Lambertz, Ghislaine; Mattout, Jeremie; Soares, Catherine; Gliga, Teodora; Baillet, Sylvain; Mangin, Jean-François

    2009-05-01

    Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and of the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory-visual integration occurs during the early stages of perception as in adults. The mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domain. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and toward the right hemisphere, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore how complex and structured is the human cortical organization which sustains communication from the first weeks of life on.

  5. Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony.

    PubMed

    Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J

    2015-03-01

    Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. Copyright © 2015 the American Physiological Society.

  6. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  7. Adaptation, perceptual learning, and plasticity of brain functions.

    PubMed

    Horton, Jonathan C; Fahle, Manfred; Mulder, Theo; Trauzettel-Klosinski, Susanne

    2017-03-01

    The capacity for functional restitution after brain damage is quite different in the sensory and motor systems. This series of presentations highlights the potential for adaptation, plasticity, and perceptual learning from an interdisciplinary perspective. The chances for restitution in the primary visual cortex are limited. Some patterns of visual field loss and recovery after stroke are common, whereas others are impossible, which can be explained by the arrangement and plasticity of the cortical map. On the other hand, compensatory mechanisms are effective, can occur spontaneously, and can be enhanced by training. In contrast to the human visual system, the motor system is highly flexible. This is based on special relationships between perception and action and between cognition and action. In addition, the healthy adult brain can learn new functions, e.g., increasing resolution beyond the retinal limit. The significance of these studies for rehabilitation after brain damage will be discussed.

  8. Perceptual learning effect on decision and confidence thresholds.

    PubMed

    Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano

    2016-10-01

    Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased the ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Mediated-reality magnification for macular degeneration rehabilitation

    NASA Astrophysics Data System (ADS)

    Martin-Gonzalez, Anabel; Kotliar, Konstantin; Rios-Martinez, Jorge; Lanzl, Ines; Navab, Nassir

    2014-10-01

    Age-related macular degeneration (AMD) is a gradually progressive eye condition and one of the leading causes of blindness and low vision in the Western world. Prevailing optical visual aids compensate for part of the lost visual function but omit helpful complementary information. This paper proposes an efficient magnification technique, implementable on a head-mounted display, for improving the vision of patients with AMD while preserving global information of the scene. Performance of the magnification approach was evaluated by simulating central vision loss in normally sighted subjects. Visual perception was measured as a function of text reading speed and map route following speed. Statistical analysis of the experimental results suggests that our magnification method improves reading speed 1.2-fold and spatial orientation (finding routes on a map) 1.5-fold compared to a conventional magnification approach, and may enhance the peripheral vision and quality of life of AMD subjects.

  10. Sensori-motor experience leads to changes in visual processing in the developing brain.

    PubMed

    James, Karin Harman

    2010-03-01

    Since Broca's studies on language processing, cortical functional specialization has been considered to be integral to efficient neural processing. A fundamental question in cognitive neuroscience concerns the type of learning that is required for functional specialization to develop. To address this issue with respect to the development of neural specialization for letters, we used functional magnetic resonance imaging (fMRI) to compare brain activation patterns in pre-school children before and after different letter-learning conditions: a sensori-motor group practised printing letters during the learning phase, while the control group practised visual recognition. Results demonstrated an overall left-hemisphere bias for processing letters in these pre-literate participants, but, more interestingly, showed enhanced blood oxygen-level-dependent activation in the visual association cortex during letter perception only after sensori-motor (printing) learning. It is concluded that sensori-motor experience augments processing in the visual system of pre-school children. The change of activation in these neural circuits provides important evidence that 'learning-by-doing' can lay the foundation for, and potentially strengthen, the neural systems used for visual letter recognition.

  11. The implementation of thermal image visualization by HDL based on pseudo-color

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Zhang, JiangLing

    2004-11-01

    The pseudo-color method, which maps sampled data to intuitively perceived colors, is a powerful visualization technique. This paper describes a complete pseudo-color visualization system for thermal images, covering the underlying principle, the model, and an HDL (Hardware Description Language) implementation. Thermal images, whose signal is modulated as video, reflect the temperature distribution of the measured object, so the data are massive and must be processed in real time. The solution to this intractable problem is as follows: First, a reasonable architecture, i.e., the combination of global pseudo-color visualization and accurate measurement of local areas of interest, must be adopted. Then, HDL pseudo-color algorithms in a SoC (System on Chip) implement the system to guarantee real-time performance. Finally, the key HDL algorithms for direct gray-level connection coding, proportional gray-level map coding, and enhanced gray-level map coding are presented, and their simulation results are shown. The pseudo-color visualization of thermal images implemented in HDL has effective applications in electric power equipment testing and medical diagnosis.
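    The flavor of a proportional gray-level map can be conveyed by a small software sketch: each 8-bit gray level is mapped proportionally onto a blue-green-red ramp. The specific ramp below is an assumption for illustration; the paper's actual map codings are implemented in HDL.

```python
def gray_to_pseudocolor(g):
    """Map an 8-bit gray level (0-255) onto a blue -> green -> red ramp:
    cold colors for low values, hot colors for high values."""
    if not 0 <= g <= 255:
        raise ValueError("gray level out of range")
    if g < 128:                                      # blue -> green half
        return (0, 2 * g, 255 - 2 * g)
    return (2 * (g - 128), 255 - 2 * (g - 128), 0)   # green -> red half

# Map a tiny "thermal image" (rows of gray levels) to RGB triples.
image = [[0, 64, 128], [192, 255, 30]]
rgb = [[gray_to_pseudocolor(g) for g in row] for row in image]
print(rgb[0][0], rgb[1][1])  # (0, 0, 255) (254, 1, 0)
```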

  12. Improving spatial perception in 5-yr.-old Spanish children.

    PubMed

    Jiménez, Andrés Canto; Sicilia, Antonio Oña; Vera, Juan Granda

    2007-06-01

    Assimilation of distance perception was studied in 70 Spanish primary school children. This assimilation involves the generation of projective images, which are acquired through two mechanisms. One mechanism is spatial perception, wherein perceptual processes develop ensuring successful immersion in space and the acquisition of visual cues which a person may use to interpret images seen in the distance. The other mechanism is movement through space so that these images are produced. The present study evaluated the influence on improvements in spatial perception of using increasingly larger spaces for training sessions within a motor skills program. Visual parameters were measured in relation to the capture and tracking of moving objects, or ocular motility, and speed of detection, or visual reaction time. Analysis showed that for the group trained in increasingly larger spaces, ocular motility and visual reaction time were significantly improved during different phases of the program.

  13. Body ownership promotes visual awareness.

    PubMed

    van der Hoort, Björn; Reingardt, Maria; Ehrsson, H Henrik

    2017-08-17

    The sense of ownership of one's body is important for survival, e.g., in defending the body against a threat. However, in addition to affecting behavior, it also affects perception of the world. In the case of visuospatial perception, it has been shown that the sense of ownership causes external space to be perceptually scaled according to the size of the body. Here, we investigated the effect of ownership on another fundamental aspect of visual perception: visual awareness. In two binocular rivalry experiments, we manipulated the sense of ownership of a stranger's hand through visuotactile stimulation while that hand was one of the rival stimuli. The results show that ownership, but not mere visuotactile stimulation, increases the dominance of the hand percept. This effect is due to a combination of longer perceptual dominance durations and shorter suppression durations. Together, these results suggest that the sense of body ownership promotes visual awareness.

  14. Curvilinear approach to an intersection and visual detection of a collision.

    PubMed

    Berthelon, C; Mestre, D

    1993-09-01

    Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion--for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.

  15. Body ownership promotes visual awareness

    PubMed Central

    Reingardt, Maria; Ehrsson, H Henrik

    2017-01-01

    The sense of ownership of one’s body is important for survival, e.g., in defending the body against a threat. However, in addition to affecting behavior, it also affects perception of the world. In the case of visuospatial perception, it has been shown that the sense of ownership causes external space to be perceptually scaled according to the size of the body. Here, we investigated the effect of ownership on another fundamental aspect of visual perception: visual awareness. In two binocular rivalry experiments, we manipulated the sense of ownership of a stranger’s hand through visuotactile stimulation while that hand was one of the rival stimuli. The results show that ownership, but not mere visuotactile stimulation, increases the dominance of the hand percept. This effect is due to a combination of longer perceptual dominance durations and shorter suppression durations. Together, these results suggest that the sense of body ownership promotes visual awareness. PMID:28826500

  16. Visual Perception Based Rate Control Algorithm for HEVC

    NASA Astrophysics Data System (ADS)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology that alleviates the contradiction between video quality and limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception: for key focus regions, LCU-level bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality across various video applications.
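    A perception-weighted bit allocation of this general kind can be sketched as follows. The weighting formula, field names, and numbers are invented for illustration and are not the authors' model; the idea is only that darker and faster-moving LCUs receive a larger share of the frame's bit budget.

```python
def perceptual_weights(lcus):
    """Normalized bit-allocation weights per LCU: more motion and lower
    luminance (weaker luminance masking) -> larger weight. Illustrative
    formula, not the paper's actual model."""
    raw = []
    for lcu in lcus:
        lum = lcu["mean_luma"] / 255.0       # normalized luminance, 0..1
        motion = lcu["motion_mag"]           # mean motion-vector magnitude
        raw.append((1.0 + motion) / (0.5 + lum))
    total = sum(raw)
    return [w / total for w in raw]          # weights sum to 1

lcus = [
    {"mean_luma": 200, "motion_mag": 0.0},   # bright, static background
    {"mean_luma": 80,  "motion_mag": 4.0},   # dark, fast-moving focus region
    {"mean_luma": 128, "motion_mag": 1.0},
]
weights = perceptual_weights(lcus)
frame_bits = 100_000
allocation = [round(frame_bits * w) for w in weights]
print(allocation)  # most bits go to the dark, moving LCU
```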

  17. Perception of Emotion: Differences in Mode of Presentation, Sex of Perceiver, and Race of Expressor.

    ERIC Educational Resources Information Center

    Kozel, Nicholas J.; Gitter, A. George

    A 2 x 2 x 4 factorial design was utilized to investigate the effects of sex of perceiver, race of expressor (Negro and White), and mode of presentation of stimuli (audio and visual, visual only, audio only, and still pictures) on perception of emotion (POE). Perception of seven emotions (anger, happiness, surprise, fear, disgust, pain, and…

  18. The contribution of visual processing to academic achievement in adolescents born extremely preterm or extremely low birth weight.

    PubMed

    Molloy, Carly S; Di Battista, Ashley M; Anderson, Vicki A; Burnett, Alice; Lee, Katherine J; Roberts, Gehan; Cheong, Jeanie Ly; Anderson, Peter J; Doyle, Lex W

    2017-04-01

    Children born extremely preterm (EP, <28 weeks) and/or extremely low birth weight (ELBW, <1000 g) have more academic deficiencies than their term-born peers, which may be due to problems with visual processing. The aims of this study were to determine (1) whether visual processing is related to poor academic outcomes in EP/ELBW adolescents, and (2) how much of the variance in academic achievement in EP/ELBW adolescents is explained by visual processing ability after controlling for perinatal risk factors and other known contributors to academic performance, particularly attention and working memory. A geographically determined cohort of 228 surviving EP/ELBW adolescents (mean age 17 years) was studied. The relationships between measures of visual processing (visual acuity, binocular stereopsis, eye convergence, and visual perception) and academic achievement were explored within the EP/ELBW group. Analyses were repeated controlling for perinatal and social risk, and measures of attention and working memory. Visual acuity, convergence, and visual perception were related to scores for academic achievement on univariable regression analyses. After controlling for potential confounds (perinatal and social risk, working memory, and attention), visual acuity, convergence, and visual perception remained associated with reading and math computation, but only convergence and visual perception were related to spelling. The additional variance explained by visual processing was up to 6.6% for reading, 2.7% for spelling, and 2.2% for math computation. Neither the visual processing variables nor visual motor integration were associated with handwriting on multivariable analysis. Working memory was generally a stronger predictor of reading, spelling, and math computation than visual processing.
    It was concluded that visual processing difficulties are significantly related to academic outcomes in EP/ELBW adolescents; therefore, specific attention should be paid to academic remediation strategies incorporating the management of working memory and visual processing in EP/ELBW children.

  19. Visual contribution to the multistable perception of speech.

    PubMed

    Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc

    2007-11-01

    The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.

  20. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
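The "vector average" that the reflexive eye movements followed can be made concrete with a short sketch. The function below (hypothetical, not from the paper) combines two grating motion directions, weighted by their relative signal strengths, into a single pattern-motion direction; adaptation to one grating would correspond to lowering its weight:

```python
import math

def pattern_motion_direction(dir1_deg, dir2_deg, w1=1.0, w2=1.0):
    """Vector-average ('pattern motion') prediction for two drifting gratings.

    dir1_deg, dir2_deg: motion directions of the two monocular gratings (degrees).
    w1, w2: relative signal strengths (e.g., reduced by prior adaptation).
    Returns the direction (degrees) of the weighted vector average.
    """
    x = w1 * math.cos(math.radians(dir1_deg)) + w2 * math.cos(math.radians(dir2_deg))
    y = w1 * math.sin(math.radians(dir1_deg)) + w2 * math.sin(math.radians(dir2_deg))
    return math.degrees(math.atan2(y, x)) % 360

# Two orthogonal gratings (0 deg and 90 deg) of equal strength: the eyes track
# the oblique 45 deg average, while perception follows one component alone.
print(pattern_motion_direction(0, 90))
```

Weakening one grating (say, `w1=2.0, w2=1.0`) pulls the predicted pattern direction toward the stronger component, which is how adaptation can bias the tracked direction.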

  2. The visually impaired patient.

    PubMed

    Rosenberg, Eric A; Sperazza, Laura C

    2008-05-15

    Blindness or low vision affects more than 3 million Americans 40 years and older, and this number is projected to reach 5.5 million by 2020. In addition to treating a patient's vision loss and comorbid medical issues, physicians must be aware of the physical limitations and social issues associated with vision loss to optimize health and independent living for the visually impaired patient. In the United States, the four most prevalent etiologies of vision loss in persons 40 years and older are age-related macular degeneration, cataracts, glaucoma, and diabetic retinopathy. Exudative macular degeneration is treated with laser therapy, and progression of nonexudative macular degeneration in its advanced stages may be slowed with high-dose antioxidant and zinc regimens. The value of screening for glaucoma is uncertain; management of this condition relies on topical ocular medications. Cataract symptoms include decreased visual acuity, decreased color perception, decreased contrast sensitivity, and glare disability. Lifestyle and environmental interventions can improve function in patients with cataracts, but surgery is commonly performed if the condition worsens. Diabetic retinopathy responds to tight glucose control, and severe cases marked by macular edema are treated with laser photocoagulation. Vision-enhancing devices can help magnify objects, and nonoptical interventions include special filters and enhanced lighting.

  3. The value of bedside shift reporting enhancing nurse surveillance, accountability, and patient safety.

    PubMed

    Jeffs, Lianne; Acott, Ashley; Simpson, Elisa; Campbell, Heather; Irwin, Terri; Lo, Joyce; Beswick, Susan; Cardoso, Roberta

    2013-01-01

    A study was undertaken to explore nurses' experiences and perceptions associated with the implementation of bedside nurse-to-nurse shift handoff reporting. Interviews were conducted with nurses and analyzed using directed content analysis. Two themes emerged that illustrated the value of bedside shift reporting: "clarifying information and intercepting errors" and "visualizing patients and prioritizing care." Nurse leaders can leverage the study findings in their efforts to embed nurse-to-nurse bedside shift reporting in their respective organizations.

  4. Compensatory Plasticity in the Deaf Brain: Effects on Perception of Music

    PubMed Central

    Good, Arla; Reed, Maureen J.; Russo, Frank A.

    2014-01-01

    When one sense is unavailable, sensory responsibilities shift and processing of the remaining modalities becomes enhanced to compensate for missing information. This shift, referred to as compensatory plasticity, results in a unique sensory experience for individuals who are deaf, including the manner in which music is perceived. This paper evaluates the neural, behavioural and cognitive evidence for compensatory plasticity following auditory deprivation and considers how this manifests in a unique experience of music that emphasizes visual and vibrotactile modalities. PMID:25354235

  5. Human infrared vision is triggered by two-photon chromophore isomerization

    PubMed Central

    Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof

    2014-01-01

    Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The structure of the human eye and the absorption spectra of its pigments limit our visual perception of light: we are most responsive to stimuli in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near-infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and that their response displays a quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound, 11-cis-retinyl-propylamine Schiff base, demonstrate direct isomerization of the visual chromophore by two-photon absorption. Indeed, quantum-mechanical modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near-infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
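The quadratic dependence on laser power is the classic signature of a two-photon process: a one-photon response scales linearly with intensity, a two-photon response with its square. A toy illustration (the rate constants are hypothetical, not measured values from the paper):

```python
def photoisomerization_rate(power, k1=0.0, k2=1.0):
    """Toy response model: the one-photon term scales linearly with laser
    power (k1), the two-photon term with power squared (k2).
    Constants are illustrative, in arbitrary units."""
    return k1 * power + k2 * power ** 2

# Doubling the power quadruples a purely two-photon response -- the
# quadratic signature reported for stimulation above 900 nm.
ratio = photoisomerization_rate(2.0) / photoisomerization_rate(1.0)
print(ratio)  # 4.0
```

A mixed response (nonzero `k1`) would give a ratio between 2 and 4, which is one way such experiments separate the two contributions.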

  6. The effect of phasic auditory alerting on visual perception.

    PubMed

    Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas

    2017-08-01

    Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information. Copyright © 2017 Elsevier B.V. All rights reserved.
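In the TVA framework referenced above, single-item report accuracy is commonly modeled as an exponential race: encoding proceeds at processing speed v once the exposure duration exceeds the perceptual threshold t0. A sketch under that standard assumption (the parameter values are illustrative, not estimates from the study):

```python
import math

def report_probability(exposure_ms, v, t0):
    """Probability of correctly reporting a single letter under the standard
    TVA exponential model: encoding races at rate v (items/s) during the
    portion of the exposure that exceeds the threshold t0 (ms)."""
    effective_s = max(0.0, exposure_ms - t0) / 1000.0
    return 1.0 - math.exp(-v * effective_s)

# Alerting is reported to raise v and lower t0, so accuracy at a fixed,
# brief exposure should increase under an alerting cue.
baseline = report_probability(80, v=30.0, t0=20.0)
alerted = report_probability(80, v=40.0, t0=15.0)
```

Fitting v and t0 separately across alerting conditions is what lets the authors attribute the cue-intensity effect to processing speed rather than to the conscious-perception threshold.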

  7. The working memory Ponzo illusion: Involuntary integration of visuospatial information stored in visual working memory.

    PubMed

    Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan

    2015-08-01

    Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; rather, the stored visual information continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of VWM content, by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation orders. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. The mere exposure effect is modulated by selective attention but not visual awareness.

    PubMed

    Huang, Yu-Feng; Hsieh, Po-Jang

    2013-10-18

    Repeated exposures to an object will lead to an enhancement of evaluation toward that object. Although this mere exposure effect may occur when the objects are presented subliminally, the role of conscious perception per se on evaluation has never been examined. Here we use a binocular rivalry paradigm to investigate whether a variance in conscious perceptual duration of faces has an effect on their subsequent evaluation, and how selective attention and memory interact with this effect. Our results show that face evaluation is positively biased by selective attention but not affected by visual awareness. Furthermore, this effect is not due to participants recalling which face had been attended to. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Object Persistence Enhances Spatial Navigation: A Case Study in Smartphone Vision Science.

    PubMed

    Liverence, Brandon M; Scholl, Brian J

    2015-07-01

    Violations of spatiotemporal continuity disrupt performance in many tasks involving attention and working memory, but experiments on this topic have been limited to the study of moment-by-moment on-line perception, typically assessed by passive monitoring tasks. We tested whether persisting object representations also serve as underlying units of longer-term memory and active spatial navigation, using a novel paradigm inspired by the visual interfaces common to many smartphones. Participants used key presses to navigate through simple visual environments consisting of grids of icons (depicting real-world objects), only one of which was visible at a time through a static virtual window. Participants found target icons faster when navigation involved persistence cues (via sliding animations) than when persistence was disrupted (e.g., via temporally matched fading animations), with all transitions inspired by smartphone interfaces. Moreover, this difference occurred even after explicit memorization of the relevant information, which demonstrates that object persistence enhances spatial navigation in an automatic and irresistible fashion. © The Author(s) 2015.

  10. Tolerance for audiovisual asynchrony is enhanced by the spectrotemporal fidelity of the speaker's mouth movements and speech.

    PubMed

    Shahin, Antoine J; Shen, Stanley; Kerlin, Jess R

    2017-01-01

    We examined the relationship between tolerance for audiovisual onset asynchrony (AVOA) and the spectrotemporal fidelity of spoken words and the speaker's mouth movements. In two experiments that varied only in the temporal order of the sensory modalities, visual speech leading (exp1) or lagging (exp2) acoustic speech, participants watched intact and blurred videos of a speaker uttering trisyllabic words and nonwords that were noise-vocoded with 4, 8, 16, and 32 channels. They judged whether the speaker's mouth movements and the speech sounds were in-sync or out-of-sync. Individuals perceived synchrony (tolerated AVOA) on more trials when the acoustic speech was more speech-like (8 channels and higher vs. 4 channels), and when the visual speech was intact rather than blurred (exp1 only). These findings suggest that enhanced spectrotemporal fidelity of the audiovisual (AV) signal prompts the brain to widen the window of integration, promoting the fusion of temporally distant AV percepts.

  11. Early Binocular Input Is Critical for Development of Audiovisual but Not Visuotactile Simultaneity Perception.

    PubMed

    Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne

    2017-02-20

    Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Clonal selection versus clonal cooperation: the integrated perception of immune objects

    PubMed Central

    Nataf, Serge

    2016-01-01

    Analogies between the immune and nervous systems were first envisioned by the immunologist Niels Jerne, who introduced the concepts of antigen "recognition" and immune "memory". Since then, however, it appears that only the cognitive immunology paradigm proposed by Irun Cohen has attempted to further theorize immune system function through the prism of neuroscience. The present paper revisits this analogy-based reasoning. In particular, a parallel is drawn between the brain pathways of visual perception and the processes allowing the global perception of an "immune object". In the visual system, distinct features of a visual object (shape, color, motion) are perceived separately by distinct neuronal populations during a primary perception task. The output signals generated during this first step then instruct an integrated perception task performed by other neuronal networks. Such a higher-order perception step is in essence a cooperative task that is mandatory for the global perception of visual objects. Based on a re-interpretation of recent experimental data, it is suggested that similar general principles drive the integrated perception of immune objects in secondary lymphoid organs (SLOs). In this scheme, the four main categories of signals characterizing an immune object (antigenic, contextual, temporal and localization signals) are first perceived separately by distinct networks of immunocompetent cells. Then, in a multitude of SLO niches, the output signals generated during this primary perception step are integrated by TH-cells at the single-cell level. This process eventually generates a multitude of T-cell and B-cell clones that perform, at the scale of SLOs, an integrated perception of immune objects. Overall, this new framework proposes that integrated immune perception and, consequently, integrated immune responses rely essentially on clonal cooperation rather than clonal selection. PMID:27830060

  13. Perception of CPR quality: Influence of CPR feedback, Just-in-Time CPR training and provider role.

    PubMed

    Cheng, Adam; Overly, Frank; Kessler, David; Nadkarni, Vinay M; Lin, Yiqun; Doan, Quynh; Duff, Jonathan P; Tofil, Nancy M; Bhanji, Farhan; Adler, Mark; Charnovich, Alex; Hunt, Elizabeth A; Brown, Linda L

    2015-02-01

    Many healthcare providers rely on visual perception to guide cardiopulmonary resuscitation (CPR), but little is known about the accuracy of provider perceptions of CPR quality. We aimed to describe the difference between perceived and measured CPR quality, and to determine the impact of provider role, real-time visual CPR feedback, and Just-in-Time (JIT) CPR training on provider perceptions. We conducted secondary analyses of data collected from a prospective, multicenter, randomized trial of 324 healthcare providers who participated in a simulated cardiac arrest scenario between July 2012 and April 2014. Participants were randomized to one of four permutations of two interventions: JIT CPR training and real-time visual CPR feedback. We calculated the difference between perceived and measured quality of CPR and reported the proportion of subjects accurately estimating the quality of CPR within each study arm. Participants overestimated achieving adequate chest compression depth (mean difference range: 16.1-60.6%) and rate (range: 0.2-51%), and underestimated chest compression fraction (0.2-2.9%) across all arms. Compared to no intervention, the use of real-time feedback and JIT CPR training (alone or in combination) improved perception of depth (p<0.001). Accurate estimation of CPR quality was poor for chest compression depth (0-13%), rate (5-46%), and chest compression fraction (60-63%). Perception of depth was more accurate in CPR providers than in team leaders (27.8% vs. 7.4%; p=0.043) when using real-time feedback. Healthcare providers' visual perception of CPR quality is poor. Perceptions of CPR depth are improved by using real-time visual feedback and by prior JIT CPR training. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.

  15. Conscious Vision Proceeds from Global to Local Content in Goal-Directed Tasks and Spontaneous Vision.

    PubMed

    Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine

    2016-05-11

    The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? 
We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.

  16. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  17. Next gen perception and cognition: augmenting perception and enhancing cognition through mobile technologies

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.

    2015-03-01

    Mobile technologies are now ubiquitous, and the complexity of engineering problems is continuously increasing. In this paper we explore possible causes of a saturation in technology evolution, namely limits on the ability to solve problems by building on previous results and on the ability to express solutions efficiently. We conclude that "thinking outside of the brain", as when solving engineering problems that must be expressed in a virtual medium because of their complexity, would benefit from augmentation by mobile technology. This could be the necessary evolutionary step that provides the efficiency required to solve new complex problems (addressing the "running out of time" issue) and removes the barrier to communicating results (addressing the human "perception/expression imbalance" issue). Some consequences are discussed; in this context, artificial intelligence becomes an automation aid rather than a necessary next evolutionary step. The paper concludes that research into modeling as a problem-solving aid and data visualization as a perception aid, both augmented with mobile technologies, could be the path to an evolutionary step in advancing engineering.

  18. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. The colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.

  19. Neural Integration in Body Perception.

    PubMed

    Ramsey, Richard

    2018-06-19

    The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.

  20. Sleeping on the rubber-hand illusion: Memory reactivation during sleep facilitates multisensory recalibration.

    PubMed

    Honma, Motoyasu; Plass, John; Brang, David; Florczak, Susan M; Grabowecky, Marcia; Paller, Ken A

    2016-01-01

    Plasticity is essential in body perception so that physical changes in the body can be accommodated and assimilated. Multisensory integration of visual, auditory, tactile, and proprioceptive signals contributes both to conscious perception of the body's current state and to associated learning. However, much is unknown about how novel information is assimilated into body perception networks in the brain. Sleep-based consolidation can facilitate various types of learning via the reactivation of networks involved in prior encoding or through synaptic down-scaling. Sleep may likewise contribute to perceptual learning of bodily information by providing an optimal time for multisensory recalibration. Here we used methods for targeted memory reactivation (TMR) during slow-wave sleep to examine the influence of sleep-based reactivation of experimentally induced alterations in body perception. The rubber-hand illusion was induced with concomitant auditory stimulation in 24 healthy participants on 3 consecutive days. While each participant was sleeping in his or her own bed during intervening nights, electrophysiological detection of slow-wave sleep prompted covert stimulation with either the sound heard during illusion induction, a counterbalanced novel sound, or neither. TMR systematically enhanced feelings of bodily ownership after subsequent inductions of the rubber-hand illusion. TMR also enhanced spatial recalibration of perceived hand location in the direction of the rubber hand. This evidence for a sleep-based facilitation of a body-perception illusion demonstrates that the spatial recalibration of multisensory signals can be altered overnight to stabilize new learning of bodily representations. Sleep-based memory processing may thus constitute a fundamental component of body-image plasticity.
