The role of human ventral visual cortex in motion perception
Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene
2013-01-01
Visual motion perception is fundamental to many aspects of vision. It has long been associated with the dorsal (parietal) pathway, and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impairs motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and in complex structure-from-motion, across a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or the left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, at both slow and fast speeds, and this held true independent of the integrity of areas MT/V5, V3A, or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030
NASA Astrophysics Data System (ADS)
Assadi, Amir H.
2001-11-01
Perceptual geometry is an emerging field of interdisciplinary research whose objectives focus on the study of geometry from the perspective of visual perception and, in turn, on applying such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in the perception of form and the representation of space through a synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form and space are among the fundamental problems in vision science. In recent cognitive and computational models of human perception, natural scenes are used systematically as preferred visual stimuli. Among key problems in the perception of form and space, we have examined the perception of the geometry of natural surfaces and curves, e.g., as encountered in the observer's environment. Besides providing a systematic mathematical foundation for a remarkably general framework, the Gestalt theory of natural surfaces offers a concrete computational approach to simulating or recreating images whose geometric invariants and quantities might be perceived and estimated by an observer. The latter is at the very foundation of understanding the nature of the perception of space and form, and of the (computer graphics) problem of rendering scenes to visually evoke virtual presence.
Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers
Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin
2017-01-01
Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in the integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared with developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not of second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe is affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.
Posture-based processing in visual short-term memory for actions.
Vicary, Staci A; Stevens, Catherine J
2014-01-01
Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with an increased number of items or frames. In a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
van den Boomen, C.; van der Smagt, M. J.; Kemner, C.
2012-01-01
Visual form perception is essential for correct interpretation of, and interaction with, our environment. Form perception depends on visual acuity and on the processing of specific form characteristics, such as luminance contrast, spatial frequency, color, orientation, depth, and even motion information. Like other cognitive processes, form perception matures with age. This paper aims at providing a concise overview of our current understanding of the typical development, from birth to adulthood, of form-characteristic processing, as measured both behaviorally and neurophysiologically. Two main conclusions can be drawn. First, the current literature conveys that for most reviewed characteristics a developmental pattern is apparent. These trajectories are discussed in relation to the organization of the visual system. The second conclusion is that significant gaps in the literature exist for several age ranges. To complete our understanding of the typical and, by extension, atypical development of the visual mechanisms underlying form processing, future research should uncover these missing segments. PMID:22416236
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies have reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. Once the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random-dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays some role. PMID:25873869
Two different streams form the dorsal visual system: anatomy and functions.
Rizzolatti, Giacomo; Matelli, Massimo
2003-11-01
There are two radically different views of the functional role of the dorsal visual stream. One considers it a system involved in space perception; the other considers it a system that codes visual information for the organization of action. On the basis of new anatomical data and a reconsideration of previous functional and clinical data, we propose that the dorsal stream and its recipient parietal areas form two distinct functional systems: the dorso-dorsal stream (d-d stream) and the ventro-dorsal stream (v-d stream). The d-d stream is formed by area V6 (the main d-d extrastriate visual node) and areas V6A and MIP of the superior parietal lobule. Its major functional role is the "on line" control of actions, and its damage leads to optic ataxia. The v-d stream is formed by area MT (the main v-d extrastriate visual node) and by the visual areas of the inferior parietal lobule. Like the d-d stream, the v-d stream is responsible for action organization; however, it also plays a crucial role in space perception and action understanding. The putative mechanisms linking action and perception in the v-d stream are discussed.
NASA Astrophysics Data System (ADS)
Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur
This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education, and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families were collected using a General Information Form, and the visual perception of children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis analysis of variance were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder, and to discover whether the variables of gender, preschool education, and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically significant difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were significantly affected by gender, preschool education, and parents' educational status.
Normal form from biological motion despite impaired ventral stream function.
Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P
2011-04-01
We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernible structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+, which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies, we used point-light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to that of healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion.
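The adaptive threshold experiments mentioned here are typically run as staircase procedures. A minimal, generic sketch of a two-down/one-up staircase, which converges near the 70.7%-correct level; the response rule, step size, and reversal count are illustrative assumptions, not the authors' actual protocol:

```python
def run_staircase(respond, start=1.0, step=0.1, floor=0.0, n_reversals=8):
    """Generic 2-down/1-up adaptive staircase (converges near 70.7% correct).

    `respond(level)` runs or simulates one trial at the given stimulus
    level and returns True for a correct response.
    """
    level = start
    correct_streak = 0
    direction = None                 # 'down' (harder) or 'up' (easier)
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak == 2:  # two consecutive correct -> harder
                correct_streak = 0
                if direction == 'up':
                    reversal_levels.append(level)
                direction = 'down'
                level = max(floor, level - step)
        else:                        # one error -> easier
            correct_streak = 0
            if direction == 'down':
                reversal_levels.append(level)
            direction = 'up'
            level += step
    # Threshold estimate: mean of the final reversal levels
    tail = reversal_levels[-6:]
    return sum(tail) / len(tail)
```

With a simulated observer who is correct whenever the stimulus level is at or above some true threshold, the staircase oscillates around that point and the averaged reversals estimate it.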
ERIC Educational Resources Information Center
Clayman, Deborah P. Goldweber
The ability of 100 second-grade boys and girls to self-correct oral reading errors was studied in relationship to visual-form perception, phonic skills, response speed, and reading level. Each child was tested individually with the Bender-Error Test, the Gray Oral Paragraphs, and the Roswell-Chall Diagnostic Reading Test and placed into a group of…
Milford Visual Communications Project.
ERIC Educational Resources Information Center
Milford Exempted Village Schools, OH.
This study discusses a visual communications project designed to develop activities to promote visual literacy at the elementary and secondary school levels. The project has four phases: (1) perception of basic forms in the environment, what these forms represent, and how they inter-relate; (2) discovery and communication of more complex…
Auditory-visual fusion in speech perception in children with cochlear implants
Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.
2005-01-01
Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions.
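Ensemble-coding, as invoked here, reduces a set of individuals to summary statistics. A toy sketch of that idea, using hypothetical attribute codes rather than the authors' stimuli or analysis: hierarchy is summarized as the variance of per-member dominance ratings, and diversity as the chance that two randomly chosen members differ on a discrete attribute (the Gini-Simpson index):

```python
from statistics import mean, pvariance

def ensemble_summary(dominance, categories):
    """Summarize a group from per-member attributes.

    dominance:  numeric dominance rating per member (hierarchy ~ spread)
    categories: discrete attribute per member, e.g. race or gender
                (diversity ~ proportion of unlike pairs)
    """
    n = len(categories)
    # Hierarchy: variance of dominance ratings across members
    hierarchy = pvariance(dominance)
    # Diversity: probability that two random members differ
    counts = {c: categories.count(c) for c in set(categories)}
    diversity = 1 - sum((k / n) ** 2 for k in counts.values())
    return {"mean_dominance": mean(dominance),
            "hierarchy": hierarchy,
            "diversity": diversity}
```

A perfectly homogeneous group scores zero on both measures; mixed categories and spread-out dominance ratings raise them.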
Segregation of Form, Color, Movement, and Depth: Anatomy, Physiology, and Perception
NASA Astrophysics Data System (ADS)
Livingstone, Margaret; Hubel, David
1988-05-01
Anatomical and physiological observations in monkeys indicate that the primate visual system consists of several separate and independent subdivisions that analyze different aspects of the same retinal image: cells in cortical visual areas 1 and 2 and higher visual areas are segregated into three interdigitating subdivisions that differ in their selectivity for color, stereopsis, movement, and orientation. The pathways selective for form and color seem to be derived mainly from the parvocellular geniculate subdivisions, the depth- and movement-selective components from the magnocellular. At lower levels, in the retina and in the geniculate, cells in these two subdivisions differ in their color selectivity, contrast sensitivity, temporal properties, and spatial resolution. These major differences in the properties of cells at lower levels in each of the subdivisions led to the prediction that different visual functions, such as color, depth, movement, and form perception, should exhibit corresponding differences. Human perceptual experiments are remarkably consistent with these predictions. Moreover, perceptual experiments can be designed to ask which subdivisions of the system are responsible for particular visual abilities, such as figure/ground discrimination or perception of depth from perspective or relative movement--functions that might be difficult to deduce from single-cell response properties.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Does visual attention drive the dynamics of bistable perception?
Dieter, Kevin C.; Brascamp, Jan; Tadin, Duje; Blake, Randolph
2016-01-01
How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct – depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences. PMID:27230785
The role of the right hemisphere in form perception and visual gnosis organization.
Belyi, B I
1988-06-01
Peculiarities of picture-series interpretation and Rorschach test results in patients with unilateral benign hemispheric tumours are discussed. It is concluded that visual perception in the right hemisphere has a hierarchic structure, i.e., each successive area from the occipital lobe towards the frontal lobe serves a more complex function. Visual engrams are distributed over the right hemisphere in a manner similar to the way visual information is recorded in holographic systems. With any impairment of the right hemisphere, a tendency towards whole but unclear vision arises. The preservation of lower levels of visual perception provides clear vision of only small parts of the image. Thus, confabulatory phenomena arise, which are specific to right-hemispheric lesions.
Ventral aspect of the visual form pathway is not critical for the perception of biological motion
Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene
2015-01-01
Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504
[Social behavior, musicality and visual perception in mongoloid children (author's transl)].
Rabensteiner, B
1975-01-01
Forty-nine mongoloid and 48 non-mongoloid subjects of equivalent age and intelligence were selected and studied with respect to social behavior, speech disorders (behavioral observation), musicality, and visual perception. There were significant differences in favor of the mongoloid children with respect to social adaptation. Speech disorders of all kinds occurred significantly more frequently in the mongoloid children; stuttering was significantly more frequent in the boys. The mongoloid group did significantly better in the musicality test; the difference in the rhythmical part was highly significant. The average differences in the capacity for visual discrimination of colors, geometrical forms, and the spatial relationships of geometrical forms were not significant.
Dissociating 'what' and 'how' in visual form agnosia: a computational investigation.
Vecera, S P
2002-01-01
Patients with visual form agnosia exhibit a profound impairment in shape perception (what an object is) coupled with intact visuomotor functions (how to act on an object), demonstrating a dissociation between visual perception and action. How can these patients act on objects that they cannot perceive? Although two explanations of this 'what-how' dissociation have been offered, each explanation has shortcomings. A 'pathway information' account of the 'what-how' dissociation is presented in this paper. This account hypothesizes that 'where' and 'how' tasks require less information than 'what' tasks, thereby allowing 'where/how' to remain relatively spared in the face of neurological damage. Simulations with a neural network model test the predictions of the pathway information account. Following damage to an input layer common to the 'what' and 'where/how' pathways, the model performs object identification more poorly than spatial localization. Thus, the model offers a parsimonious explanation of differential 'what-how' performance in visual form agnosia. The simulation results are discussed in terms of their implications for visual form agnosia and other neuropsychological syndromes.
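The pathway-information account turns on a counting argument: identifying an object among many alternatives requires more input information than localizing it among a few. A deliberately simple sketch of that argument, using a hypothetical toy coding rather than Vecera's actual network or training procedure: classes are binary patterns over a shared input layer, a "lesion" silences input units, and we count how many classes become indistinguishable:

```python
def ambiguity(n_classes, n_lesioned, n_units=8):
    """How badly does a lesion collapse class discrimination?

    Classes are coded as binary patterns over `n_units` shared input
    units; the lesion silences the `n_lesioned` highest-order units.
    Returns the average number of classes per surviving distinguishable
    pattern (1.0 = discrimination fully intact).
    """
    surviving_bits = range(n_units - n_lesioned)
    patterns = {tuple((c >> b) & 1 for b in surviving_bits)
                for c in range(n_classes)}
    return n_classes / len(patterns)

# 'what': 256 identities vs 'where': 4 locations, over the same 8 units.
# Silencing half the input layer leaves the 4-way task intact while
# each surviving pattern now stands for 16 different identities.
```

Under this coding, `ambiguity(256, 4)` returns 16.0 while `ambiguity(4, 4)` stays at 1.0: the task with fewer alternatives survives the same lesion, which is the qualitative pattern the pathway-information account predicts.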
Herrera-Guzmán, I; Peña-Casanova, J; Lara, J P; Gudayol-Ferré, E; Böhm, P
2004-08-01
The assessment of visual perception and cognition forms an important part of any general cognitive evaluation. We studied the possible influence of age, sex, and education on performance in visual perception tasks in a normal elderly Spanish population (90 healthy subjects). To evaluate visual perception and cognition, we used the subjects' performance on the Visual Object and Space Perception Battery (VOSP). The test consists of 8 subtests: 4 measure visual object perception (Incomplete Letters, Silhouettes, Object Decision, and Progressive Silhouettes) while the other 4 measure visual space perception (Dot Counting, Position Discrimination, Number Location, and Cube Analysis). The statistical procedures employed were either simple or multiple linear regression analyses (for subtests with a normal distribution) and Mann-Whitney tests followed by ANOVA with Scheffé correction (for subtests without a normal distribution). Age and sex were found to be significant modifying factors in the Silhouettes, Object Decision, Progressive Silhouettes, Position Discrimination, and Cube Analysis subtests. Educational level was found to be a significant predictor of performance in the Silhouettes and Object Decision subtests. The results of the sample were adjusted in line with the observed differences. Our study also offers preliminary normative data for the administration of the VOSP to an elderly Spanish population. The results are discussed and compared with similar studies performed in different cultural backgrounds.
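Normative adjustment of the kind described (regressing raw subtest scores on demographic predictors, then norming against the regression residuals) can be sketched as follows. This is an illustrative simplification, not the authors' procedure: it uses age as the sole predictor and invented scores, whereas the study fitted simple or multiple regressions on age, sex, and education:

```python
from statistics import mean, stdev

def fit_norms(scores, ages):
    """Fit score ~ age by least squares and return a scorer that
    converts a raw score into an age-adjusted z-score."""
    a_mean, s_mean = mean(ages), mean(scores)
    slope = (sum((a - a_mean) * (s - s_mean) for a, s in zip(ages, scores))
             / sum((a - a_mean) ** 2 for a in ages))
    intercept = s_mean - slope * a_mean
    # Residual spread defines the normative standard deviation
    residuals = [s - (intercept + slope * a) for s, a in zip(scores, ages)]
    sd = stdev(residuals)

    def z(raw_score, age):
        expected = intercept + slope * age
        return (raw_score - expected) / sd

    return z
```

A raw score is then interpreted relative to what the regression predicts for that person's age, rather than against a single group mean.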
Visual body perception in anorexia nervosa.
Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo
2012-05-01
Disturbance of body perception is a central aspect of anorexia nervosa (AN), and several neuroimaging studies have documented structural and functional alterations of occipito-temporal cortices involved in visual body processing. However, it is unclear whether these alterations extend to more basic aspects of others' body perception. A consecutive sample of 15 adolescent patients with AN was compared with a group of 15 age- and gender-matched controls in delayed matching-to-sample tasks requiring the visual discrimination of the form or of the action of others' bodies. Patients showed better visual discrimination performance than controls in detail-based processing of body forms, but not of body actions, which positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely explore body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, one that realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
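The size-speed correlation at the heart of this model can be illustrated with a toy sketch (our simplification, not the authors' actual implementation): filters of different spatial extents respond to a moving stimulus, output thresholds covary with filter size, and the peak of the resulting population response shifts toward larger filters at higher speeds.

```python
import numpy as np

# Toy sketch of a size-speed correlation (illustrative only): spatially
# short-range filters of different sizes respond to a dot moving at speed v.
# Each filter responds according to how well the per-frame displacement
# matches its spatial extent; output thresholds covary with filter size,
# and divisive normalization stands in for competition.

def filter_responses(speed, sizes, dt=1.0):
    displacement = speed * dt                       # distance moved per frame
    raw = np.exp(-((displacement - sizes) ** 2) / (2 * (0.5 * sizes) ** 2))
    thresholds = 0.1 * sizes                        # threshold covaries with size
    out = np.maximum(raw - thresholds, 0.0)         # thresholded transient output
    total = out.sum()
    return out / total if total > 0 else out        # competition (normalization)

sizes = np.array([1.0, 2.0, 4.0, 8.0])  # filter spatial extents (arbitrary units)
slow = filter_responses(1.0, sizes)     # population peak at the smallest filter
fast = filter_responses(8.0, sizes)     # population peak at the largest filter
```

Reading out the peak (or centroid) of this population response yields a speed estimate that grows with stimulus speed, which is the sense in which the population code "realizes" a size-speed correlation.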
Structural and functional changes across the visual cortex of a patient with visual form agnosia.
Bridge, Holly; Thomas, Owen M; Minini, Loredana; Cavina-Pratesi, Cristiana; Milner, A David; Parker, Andrew J
2013-07-31
Loss of shape recognition in visual-form agnosia occurs without equivalent losses in the use of vision to guide actions, providing support for the hypothesis of two visual systems (for "perception" and "action"). The human individual DF received a toxic exposure to carbon monoxide some years ago, which resulted in a persisting visual-form agnosia that has been extensively characterized at the behavioral level. We conducted a detailed high-resolution MRI study of DF's cortex, combining structural and functional measurements. We present the first accurate quantification of the changes in thickness across DF's occipital cortex, finding the most substantial loss in the lateral occipital cortex (LOC). There are reduced white matter connections between LOC and other areas. Functional measures show pockets of activity that survive within structurally damaged areas. The topographic mapping of visual areas showed that ordered retinotopic maps were evident for DF in the ventral portions of visual cortical areas V1, V2, V3, and hV4. Although V1 shows evidence of topographic order in its dorsal portion, such maps could not be found in the dorsal parts of V2 and V3. We conclude that it is not possible to understand fully the deficits in object perception in visual-form agnosia without the exploitation of both structural and functional measurements. Our results also highlight for DF the cortical routes through which visual information is able to pass to support her well-documented abilities to use visual information to guide actions.
Rise and fall of the two visual systems theory.
Rossetti, Yves; Pisella, Laure; McIntosh, Robert D
2017-06-01
Among the many dissociations describing the visual system, the dual theory of two visual systems, respectively dedicated to perception and action, has attracted considerable support. There are psychophysical, anatomical and neuropsychological arguments in favor of this theory. Several behavioral studies that used sensory and motor psychophysical parameters observed differences between perceptual and motor responses. The anatomical network of the visual system in the non-human primate was readily organized according to two major pathways, dorsal and ventral. Neuropsychological studies, exploring optic ataxia and visual agnosia as characteristic deficits of these two pathways, led to the proposal of a functional double dissociation between visuomotor and visual perceptual functions. After a major wave of popularity that promoted great advances, particularly in knowledge of visuomotor functions, the guiding theory is now being reconsidered. Firstly, the idea of a double dissociation between optic ataxia and visual form agnosia, as cleanly separating visuomotor from visual perceptual functions, is no longer tenable; optic ataxia does not support a dissociation between perception and action and might be more accurately viewed as a negative image of action blindsight. Secondly, the dissociations between perceptual and motor responses highlighted in the framework of this theory concern a very elementary level of action, even automatically guided action routines. Thirdly, the richly interconnected network of the visual brain yields few arguments in favor of a strict perception/action dissociation. Overall, the dissociation between motor and perceptual functions explored by these behavioral and neuropsychological studies can help define an automatic level of action organization that is deficient in optic ataxia and preserved in action blindsight, and underlines the renewed need to consider the perception-action cycle as a functional ensemble.
Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function
Jiménez Barbosa, Ingrid Astrid
2015-01-01
Purpose To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass pattern (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than non dry-cleaners, particularly along the blue/yellow axis. 
In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low-level visual deficits (such as the perception of contrast and discrimination of colour) and high-level visual deficits (such as the perception of global form and motion), but not with visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026
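The global-dot-motion coherence manipulation described above can be sketched as follows (a minimal illustration; the dot counts, step size, and function names are our assumptions, not the study's actual stimulus code): a `coherence` fraction of dots steps in the signal direction while the remainder step in random directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gdm_step(positions, coherence, direction, step=1.0):
    """Advance a global-dot-motion display by one frame: a `coherence`
    fraction of dots move in the signal direction (an angle in radians),
    the rest in random directions. Hypothetical parameterization, for
    illustration only."""
    n = len(positions)
    n_signal = int(round(coherence * n))
    angles = np.full(n, float(direction))
    angles[n_signal:] = rng.uniform(0.0, 2 * np.pi, n - n_signal)  # noise dots
    return positions + step * np.column_stack([np.cos(angles), np.sin(angles)])

dots = rng.uniform(0.0, 100.0, size=(200, 2))        # random starting positions
moved = gdm_step(dots, coherence=0.4, direction=0.0)  # 40% rightward signal
mean_dx = (moved - dots)[:, 0].mean()                 # net rightward drift
```

Lowering `coherence` toward an observer's threshold is what "varying the pattern coherence" means in the abstract: the net drift shrinks until the global direction is no longer reliably detectable.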
Perception of shapes targeting local and global processes in autism spectrum disorders.
Grinter, Emma J; Maybery, Murray T; Pellicano, Elizabeth; Badcock, Johanna C; Badcock, David R
2010-06-01
Several researchers have found evidence for impaired global processing in the dorsal visual stream in individuals with autism spectrum disorders (ASDs). However, support for a similar pattern of visual processing in the ventral visual stream is less consistent. Critical to resolving the inconsistency is the assessment of local and global form processing ability. Within the visual domain, radial frequency (RF) patterns - shapes formed by sinusoidally varying the radius of a circle to add 'bumps' of a certain number to a circle - can be used to examine local and global form perception. Typically developing children and children with an ASD discriminated between circles and RF patterns that are processed either locally (RF24) or globally (RF3). Children with an ASD required greater shape deformation to identify RF3 shapes compared to typically developing children, consistent with difficulty in global processing in the ventral stream. No group difference was observed for RF24 shapes, suggesting intact local ventral-stream processing. These outcomes support the position that a deficit in global visual processing is present in ASDs, consistent with the notion of Weak Central Coherence.
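Radial frequency patterns have a simple closed form: the radius of a base circle is modulated sinusoidally around the contour, adding the stated number of "bumps". A minimal sketch of the standard construction (parameter names are ours):

```python
import numpy as np

def rf_contour(base_radius, amplitude, frequency, phase=0.0, n_points=360):
    """Radial frequency (RF) pattern contour: r(theta) = r0 * (1 + A * sin(f*theta + phi)).
    `frequency` is the number of bumps added to the circle; `amplitude` is the
    modulation depth (deformation from circularity)."""
    theta = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    r = base_radius * (1 + amplitude * np.sin(frequency * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

# RF3 (globally processed, 3 bumps) vs RF24 (locally processed, 24 bumps)
x3, y3 = rf_contour(1.0, 0.05, 3)
x24, y24 = rf_contour(1.0, 0.05, 24)
```

In a discrimination task like the one described, `amplitude` is the quantity varied to find the minimum shape deformation at which a pattern is distinguishable from a circle.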
Perception of ensemble statistics requires attention.
Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A
2017-02-01
To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics. Copyright © 2016 Elsevier Inc. All rights reserved.
Humphreys, Glyn W
2016-10-01
The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.
Pictorial communication in virtual and real environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor)
1991-01-01
Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)
Brief Report: Autism-like Traits are Associated With Enhanced Ability to Disembed Visual Forms.
Sabatino DiCriscio, Antoinette; Troiani, Vanessa
2017-05-01
Atypical visual perceptual skills are thought to underlie unusual visual attention in autism spectrum disorders. We assessed whether individual differences in visual processing skills scaled with quantitative traits associated with the broader autism phenotype (BAP). Visual perception was assessed using the Figure-ground subtest of the Test of visual perceptual skills-3rd Edition (TVPS). In a large adult cohort (n = 209), TVPS-Figure Ground scores were positively correlated with autistic-like social features as assessed by the Broader autism phenotype questionnaire. This relationship was gender-specific, with males showing a correspondence between visual perceptual skills and autistic-like traits. This work supports the link between atypical visual perception and autism and highlights the importance in characterizing meaningful individual differences in clinically relevant behavioral phenotypes.
Cho, Hwi-Young; Kim, Kitae; Lee, Byounghee; Jung, Jinhwa
2015-03-01
[Purpose] This study investigated brain wave and visual perception changes in stroke subjects using neurofeedback (NFB) training. [Subjects] Twenty-seven stroke subjects were randomly allocated to the NFB group (n = 13) and the control (CON) group (n = 14). [Methods] Two expert therapists provided both groups with traditional rehabilitation therapy in 30 thirty-minute sessions over the course of 6 weeks. NFB training was provided only to the NFB group; the CON group received traditional rehabilitation therapy only. Before and after the 6-week intervention, a brain wave test and the motor-free visual perception test (MVPT) were performed. [Results] Both groups showed significant differences in their relative beta wave values and attention concentration quotients. Moreover, the NFB group showed a significant difference in MVPT visual discrimination, form constancy, visual memory, visual closure, spatial relations, raw score, and processing time. [Conclusion] This study demonstrated that NFB training is more effective for increasing concentration and visual perception than traditional rehabilitation alone. In further studies, detailed and diverse investigations should be performed considering the number and characteristics of subjects, and the NFB training period.
Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George
2017-08-15
A long-held view of the visual system is that form and motion are independently analysed. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive Transcranial Magnetic Stimulation (rTMS) to investigate in human observers the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over early visual areas affected the perception of static GPs, but the stimulation of area V5/MT did not affect observers' performance. On the other hand, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that early visual areas seem to be involved in the processing of the spatial structure of GPs, and interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly interfering with early form-motion integration. However, visual area V5/MT is likely to be involved only in the processing of the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2. Copyright © 2017 Elsevier Inc. All rights reserved.
Thurman, Steven M; Lu, Hongjing
2014-01-01
Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
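The reliability-weighted integration at the core of this kind of model follows the standard Bayesian cue-combination rule, with each cue weighted in proportion to its reliability (inverse variance). A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

def integrate_cues(estimates, sigmas):
    """Fuse independent Gaussian cue estimates: weight w_i is proportional
    to reliability 1/sigma_i**2, and the fused estimate is the
    reliability-weighted mean (standard Bayesian cue combination)."""
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    fused = float(np.dot(weights, estimates))
    fused_sigma = float(np.sqrt(1.0 / reliabilities.sum()))  # fused uncertainty
    return fused, weights, fused_sigma

# Position cue says 10 deg, orientation cue says 14 deg. As orientation
# uncertainty grows, the position cue increasingly dominates the percept,
# mirroring the trade-off reported for incongruent displays.
fused_a, w_a, _ = integrate_cues([10.0, 14.0], sigmas=[1.0, 1.0])  # equal reliability
fused_b, w_b, _ = integrate_cues([10.0, 14.0], sigmas=[1.0, 4.0])  # noisy orientation
```

With equal reliabilities the fused estimate sits midway between the cues; quadrupling the orientation cue's sigma pulls the estimate almost entirely onto the position cue, which is the qualitative pattern the behavioral data showed.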
Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2012-01-01
Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
Segregation of Form, Color, Movement, and Depth: Anatomy, Physiology, and Perception.
ERIC Educational Resources Information Center
Livingstone, Margaret; Hubel, David
1988-01-01
Summarizes the anatomical, physiological, and psychological evidence related to the primate visual system. States that comparison of perceptual abilities with the electrophysiological properties of neurons may help deduce functions of visual areas. (RT)
Cronly-Dillon, J; Persaud, K; Gregory, R P
1999-01-01
This study demonstrates the ability of blind (previously sighted) and blindfolded (sighted) subjects to reconstruct and identify a number of visual targets transformed into equivalent musical representations. Visual images are deconstructed through a process which selectively segregates different features of the image into separate packages. These are then encoded in sound and presented as a polyphonic musical melody which resembles a Baroque fugue with many voices, allowing subjects to analyse the component voices selectively in combination, or separately in sequence, and thereby to patch together and bind the different features of the object into a mental percept of a single recognizable entity. The visual targets used in this study included a variety of geometrical figures, simple high-contrast line drawings of man-made objects, natural and urban scenes, etc., translated into sound and presented to the subject in polyphonic musical form. PMID:10643086
High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.
Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min
2012-01-01
Small target detection in high-resolution remotely sensed data is difficult, and the limitations of existing methods have made it a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit the advantages of fast and accurate small target detection in complex, varied natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual-system method of information processing for high-resolution remote sensing data. A small target detector and corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of pool cells, and the character of nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.
Sharpening vision by adapting to flicker.
Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A
2016-11-01
Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision-allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.
More Than Meets the Eye: Split-Second Social Perception
Freeman, Jonathan B.; Johnson, Kerri L.
2017-01-01
Recent research suggests that visual perception of social categories is shaped not only by facial features but also by higher-order social cognitive processes (e.g., stereotypes, attitudes, goals). Building on neural computational models of social perception, we outline a perspective of how multiple bottom-up visual cues are flexibly integrated with a range of top-down processes to form perceptions, and we identify a set of key brain regions involved. During this integration, ‘hidden’ social category activations are often triggered which temporarily impact perception without manifesting in explicit perceptual judgments. Importantly, these hidden impacts and other aspects of the perceptual process predict downstream social consequences – from politicians’ electoral success to several evaluative biases – independently of the outcomes of that process. PMID:27050834
Visual contribution to the multistable perception of speech.
Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc
2007-11-01
The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.
Kim, Kyung Hwan; Kim, Ja Hyun
2006-02-20
The aim of this study was to compare spatiotemporal cortical activation patterns during the visual perception of Korean, English, and Chinese words. The comparison of these three languages offers an opportunity to study the effect of written form on the cortical processing of visually presented words, because of the partial similarities and differences among words of these languages, and the familiarity of native Koreans with all three languages at the word level. Single-character words and pictograms were excluded from the stimuli in order to activate only the neuronal circuitries involved in word perception. Since a variety of cerebral processes are sequentially evoked during visual word perception, high temporal resolution is required, and we therefore utilized event-related potentials (ERPs) obtained from high-density electroencephalograms. The differences and similarities observed in statistical analyses of ERP amplitudes, the correlations between ERP amplitudes and response times, and the patterns of current source density appear to be in line with the demands of visual and semantic analysis arising from the characteristics of each language, and with the expected task difficulties for native Korean subjects.
Visual Perception of Elevation
1992-01-20
Keywords: spatial localization, pitch, roll, eye level, visual localization, perception. Program monitor: Dr. John Tangney, AFOSR, (202) 767-5021. Schermerhorn Hall, New York, NY 10027. Interim report for the period 1 January 1991 - 31 December 1991. Unclassified.
Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike
2018-01-01
A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion; 2) the purpose of lateral inhibition; 3) the speed of visual perception; and 4) how peripheral color vision occurs without a large population of cones in the peripheral retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness. This could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space based on membrane-potential oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them the "primary, secondary, and tertiary roles" of vision.
Finally, we propose that Gamma waves that are higher in strength and volume allow communication among the retina, thalamus, and various areas of the cortex, and synchronization brings cortical faculties to the retina, while the thalamus is the link that couples the retina to the rest of the brain through activity by gamma oscillations. This novel theory lays groundwork for further research by providing a theoretical understanding that expands upon the functions of the retina, photoreceptors, and retinal plexus to include parallel processing needed to form the internal visual space that we perceive as the external world. Copyright © 2017 Elsevier Ltd. All rights reserved.
Maekawa, Toshihiko; Miyanaga, Yuka; Takahashi, Kenji; Takamiya, Naomi; Ogata, Katsuya; Tobimatsu, Shozo
2017-01-01
Individuals with autism spectrum disorder (ASD) show superior performance in processing fine detail, but often exhibit impaired gestalt face perception. The ventral visual stream from the primary visual cortex (V1) to the fusiform gyrus (V4) plays an important role in form (including faces) and color perception. The aim of this study was to investigate how the ventral stream is functionally altered in ASD. Visual evoked potentials were recorded in high-functioning ASD adults (n = 14) and typically developing (TD) adults (n = 14). We used three types of visual stimuli as follows: isoluminant chromatic (red/green, RG) gratings, high-contrast achromatic (black/white, BW) gratings with high spatial frequency (HSF, 5.3 cycles/degree), and face (neutral, happy, and angry faces) stimuli. Compared with TD controls, ASD adults exhibited longer N1 latency for RG, shorter N1 latency for BW, and shorter P1 latency, but prolonged N170 latency, for face stimuli. Moreover, a greater difference in latency between P1 and N170, or between N1 for BW and N170 (i.e., the prolongation of cortico-cortical conduction time between V1 and V4) was observed in ASD adults. These findings indicate that ASD adults have enhanced fine-form (local HSF) processing, but impaired color processing at V1. In addition, they exhibit impaired gestalt face processing due to deficits in integration of multiple local HSF facial information at V4. Thus, altered ventral stream function may contribute to abnormal social processing in ASD. PMID:28146575
Perceptions of Schooling, Pedagogy and Notation in the Lives of Visually-Impaired Musicians
ERIC Educational Resources Information Center
Baker, David; Green, Lucy
2016-01-01
This article discusses findings on schooling, pedagogy and notation in the life-experiences of amateur and professional visually-impaired musicians/music teachers, and the professional experiences of sighted music teachers who work with visually-impaired learners. The study formed part of a broader UK Arts and Humanities Research Council funded…
Models of Speed Discrimination
NASA Technical Reports Server (NTRS)
1997-01-01
The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most research efforts in the study of the visual system have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared with physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, work has focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D structure from a variety of physical cues, including motion, shading, and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the outputs of the analyzers to form representations of visual objects. The processes underlying the integration of information over space therefore represent critical aspects of the visual system, and understanding them will have implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.
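The spatial-integration question raised in this abstract — how the outputs of local motion analyzers might be combined into a single global estimate — can be illustrated with a minimal vector-average read-out. This is an illustrative sketch only: the pooling rule and the function name are assumptions, not the project's actual model.

```python
import math

def vector_average(local_estimates):
    """Pool local analyzer outputs, each a (speed, direction-in-radians)
    pair, into one global motion estimate by vector averaging."""
    x = sum(s * math.cos(d) for s, d in local_estimates)
    y = sum(s * math.sin(d) for s, d in local_estimates)
    n = len(local_estimates)
    # Mean resultant length gives the pooled speed; its angle, the direction.
    return math.hypot(x, y) / n, math.atan2(y, x)

# Two equal-speed analyzers 90 degrees apart average to an intermediate
# direction at reduced speed.
speed, direction = vector_average([(1.0, 0.0), (1.0, math.pi / 2)])
```

Under this rule, inconsistent local directions partially cancel, so the pooled speed drops as local estimates disagree — one simple way a read-out stage could signal the coherence of motion across space.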
Bridging views in cinema: a review of the art and science of view integration.
Levin, Daniel T; Baker, Lewis J
2017-09-01
Recently, there has been a surge of interest in the relationship between film and cognitive science. This is reflected in a new science of cinema that can help us both to understand this art form and to produce new insights about cognition and perception. In this review, we begin by describing how the initial development of cinema involved close observation of audience response. This allowed filmmakers to develop an informal theory of visual cognition that helped them to isolate and creatively recombine fundamental elements of visual experience. We review research exploring naturalistic forms of visual perception and cognition that has opened the door to a productive convergence between the dynamic visual art of cinema and the science of visual cognition, one that can enrich both. In particular, we discuss how parallel understandings of view integration in cinema and in cognitive science have been converging to support a new understanding of meaningful visual experience. WIREs Cogn Sci 2017, 8:e1436. doi: 10.1002/wcs.1436. © 2017 Wiley Periodicals, Inc.
Magnotti, John F; Beauchamp, Michael S
2017-02-01
Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
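The causal-inference step described in this abstract can be sketched as Bayesian model comparison between a one-source and a two-source hypothesis. The sketch below is a generic illustration in the spirit of such models, not the published CIMS implementation; the Gaussian cue encoding, the zero-mean prior over sources, and all parameter values are assumptions.

```python
import math

def likelihood_common(xa, xv, sa, sv, sp):
    """p(xa, xv | one common cause), integrating over a shared source
    drawn from a zero-mean Gaussian prior with sd sp."""
    var = sa*sa*sv*sv + sa*sa*sp*sp + sv*sv*sp*sp
    num = (xa - xv)**2 * sp*sp + xa*xa * sv*sv + xv*xv * sa*sa
    return math.exp(-num / (2 * var)) / (2 * math.pi * math.sqrt(var))

def likelihood_separate(xa, xv, sa, sv, sp):
    """p(xa, xv | two independent causes), one per modality."""
    la = math.exp(-xa*xa / (2 * (sa*sa + sp*sp))) / math.sqrt(2 * math.pi * (sa*sa + sp*sp))
    lv = math.exp(-xv*xv / (2 * (sv*sv + sp*sp))) / math.sqrt(2 * math.pi * (sv*sv + sp*sp))
    return la * lv

def posterior_common(xa, xv, sa=1.0, sv=1.0, sp=2.0, p_c=0.5):
    """Posterior probability that the auditory cue xa and visual cue xv
    arose from the same talker (C = 1)."""
    l1 = likelihood_common(xa, xv, sa, sv, sp) * p_c
    l2 = likelihood_separate(xa, xv, sa, sv, sp) * (1 - p_c)
    return l1 / (l1 + l2)
```

When the two cues are similar the posterior favors a common cause and integration (fusion) is licensed; when they are far apart the posterior favors separate causes, which is one way such a model can capture why some incongruent syllable pairs fuse while others do not.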
Global processing in amblyopia: a review
Hamm, Lisa M.; Black, Joanna; Dai, Shuan; Thompson, Benjamin
2014-01-01
Amblyopia is a neurodevelopmental disorder of the visual system that is associated with disrupted binocular vision during early childhood. There is evidence that the effects of amblyopia extend beyond the primary visual cortex to regions of the dorsal and ventral extra-striate visual cortex involved in visual integration. Here, we review the current literature on global processing deficits in observers with either strabismic, anisometropic, or deprivation amblyopia. A range of global processing tasks have been used to investigate the extent of the cortical deficit in amblyopia including: global motion perception, global form perception, face perception, and biological motion. These tasks appear to be differentially affected by amblyopia. In general, observers with unilateral amblyopia appear to show deficits for local spatial processing and global tasks that require the segregation of signal from noise. In bilateral cases, the global processing deficits are exaggerated, and appear to extend to specialized perceptual systems such as those involved in face processing. PMID:24987383
More Than Meets the Eye: Split-Second Social Perception.
Freeman, Jonathan B; Johnson, Kerri L
2016-05-01
Recent research suggests that visual perception of social categories is shaped not only by facial features but also by higher-order social cognitive processes (e.g., stereotypes, attitudes, goals). Building on neural computational models of social perception, we outline a perspective of how multiple bottom-up visual cues are flexibly integrated with a range of top-down processes to form perceptions, and we identify a set of key brain regions involved. During this integration, 'hidden' social category activations are often triggered which temporarily impact perception without manifesting in explicit perceptual judgments. Importantly, these hidden impacts and other aspects of the perceptual process predict downstream social consequences - from politicians' electoral success to several evaluative biases - independently of the outcomes of that process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Attentional Routes to Conscious Perception
Chica, Ana B.; Bartolomeo, Paolo
2012-01-01
The relationships between spatial attention and conscious perception are currently the object of intense debate. Recent evidence of double dissociations between attention and consciousness casts doubt on the time-honored concept of attention as a gateway to consciousness. Here we review evidence from behavioral, neurophysiologic, neuropsychological, and neuroimaging experiments, showing that distinct sorts of spatial attention can have different effects on visual conscious perception. While endogenous, or top-down, attention has a weak influence on subsequent conscious perception of near-threshold stimuli, exogenous, or bottom-up, forms of spatial attention appear instead to be a necessary, although not sufficient, step in the development of reportable visual experiences. Fronto-parietal networks important for spatial attention, with peculiar inter-hemispheric differences, constitute plausible neural substrates for the interactions between exogenous spatial attention and conscious perception. PMID:22279440
The physiology and psychophysics of the color-form relationship: a review
Moutoussis, Konstantinos
2015-01-01
The relationship between color and form has been a long-standing issue in visual science. A picture of functional segregation and topographic clustering emerges from anatomical and electrophysiological studies in animals, as well as from brain imaging studies in humans. However, one of the many roles of chromatic information is to support form perception, and in some cases it can do so in a way superior to achromatic (luminance) information. This occurs both at an early, contour-detection stage and at late, higher stages involving spatial integration and the perception of global shapes. Pure chromatic contrast can also support several visual illusions related to form perception. On the other hand, form seems a necessary prerequisite for the computation and assignment of color across space, and there are several respects in which the color of an object can be influenced by its form. Evidently, color and form are mutually dependent. Electrophysiological studies have revealed neurons in the visual brain able to signal contours determined by pure chromatic contrast, the spatial tuning of which is similar to that of neurons carrying luminance information. It seems that, especially at an early stage, form is processed by several independent systems that interact with each other, each one having different tuning characteristics in color space. At later processing stages, mechanisms able to combine information coming from different sources emerge. A clear interaction between color and form is manifested by the fact that color-form contingencies can be observed in various perceptual phenomena such as adaptation aftereffects and illusions. Such an interaction suggests a possible early binding between these two attributes, something that has been verified by both electrophysiological and fMRI studies. PMID:26578989
Use of cues in virtual reality depends on visual feedback.
Fulvio, Jacqueline M; Rokers, Bas
2017-11-22
3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.
Implications of a Gestalt Approach to Research in Visual Communications.
ERIC Educational Resources Information Center
Becker, Ann
Gestalt theory deals with the act of thinking and the construction of concepts in a situated manner, and, therefore, could be used to study how meaning is extracted from a visual display. Using the Gestalt framework of form cues and their usage patterns in the perception of, and learning from, visual media, researchers could study frame, line…
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, in essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
Shared sensory estimates for human motion perception and pursuit eye movements.
Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C
2015-06-03
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.
Memory as Perception of the Past: Compressed Time in Mind and Brain.
Howard, Marc W
2018-02-01
In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
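The compressed-timeline idea summarized above can be sketched as a bank of hypothetical "time cells" whose temporal receptive fields are log-spaced and widen in proportion to their peak time, so that acuity falls for events further in the past. This is an illustrative sketch only; the Gaussian tuning shape and the 0.3 width coefficient are assumptions, not parameters from the article.

```python
import math

def time_cell_activity(elapsed, peak_times, cv=0.3):
    """Activity of a bank of time cells. Each cell has a Gaussian
    temporal receptive field centered on its peak time, with a width
    proportional to that peak (Weber-law-like scaling)."""
    return [math.exp(-((elapsed - t) ** 2) / (2 * (cv * t) ** 2))
            for t in peak_times]

# Log-spaced peaks: equal numbers of cells per doubling of past time,
# so temporal resolution is fine for the recent past and coarse for
# the remote past -- the analog of foveal vs. peripheral acuity.
peaks = [float(2 ** k) for k in range(6)]   # 1, 2, 4, ..., 32 s ago
acts = time_cell_activity(4.0, peaks)
```

Scanning such a population from the most recent peak outward would correspond to the serial, search-like memory access described in the abstract, with older events represented by fewer, broader-tuned cells.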
Sensory Substitution and Multimodal Mental Imagery.
Nanay, Bence
2017-09-01
Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."
Implications on visual apperception: energy, duration, structure and synchronization.
Bókkon, I; Vimal, Ram Lakhan Pandey
2010-07-01
Although primary visual cortex (V1, or striate cortex) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition), the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/MT/MST, IT, and GF). In the absence of V1, visual signals can still reach several extrastriate areas but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations must also satisfy essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neurons in neocortex (number of neurons per degree of visual angle) devoted to representing the visual field is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in the mitochondrial cytochrome oxidase-rich V1 areas; thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perceptions also demand structural conditions, an adequate duration of information representation, and synchronized neural processes and/or 'interactive hierarchical structuralism.' For visual apperception, various visual areas are involved depending on stimulus characteristics such as color, form/shape, motion, and other features. Here, we focus primarily on V1, where specific mitochondrial-rich retinotopic structures are found; we also briefly discuss V2, where these structures are sparser. We further point out that residual brain states are not fully reflected in active neural patterns after visual perception: such subliminal residual states are not captured by passive neural recording techniques, but require active stimulation to be revealed.
Pastukhov, Alexander
2016-02-01
We investigated the relation between perception and sensory memory of multi-stable structure-from-motion displays. The latter is an implicit visual memory that reflects a recent history of perceptual dominance and influences only the initial perception of multi-stable displays. First, we established the earliest time point when the direction of an illusory rotation can be reversed after the display onset (29-114 ms). Because our display manipulation did not bias perception towards a specific direction of illusory rotation but only signaled the change in motion, this means that the perceptual dominance was established no later than 29-114 ms after the stimulus onset. Second, we used orientation-selectivity of sensory memory to establish which display orientation produced the strongest memory trace and when this orientation was presented during the preceding prime interval (80-140 ms). Surprisingly, both estimates point towards the time interval immediately after the display onset, indicating that both perception and sensory memory form at approximately the same time. This suggests a tighter integration between perception and sensory memory than previously thought, warrants a reconsideration of its role in visual perception, and indicates that sensory memory could be a unique behavioral correlate of the earlier perceptual inference that can be studied post hoc.
Impaired visual recognition of biological motion in schizophrenia.
Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee
2005-09-15
Motion perception deficits have been suggested to be an important feature of schizophrenia, but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects biological motion and extracts socially relevant information from it. A deficit in biological motion perception may therefore have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against background noise (global-form task). Both tasks required detection of a global form against background noise, but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls on the global-form task but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.
Mercier, Manuel R; Schwartz, Sophie; Spinelli, Laurent; Michel, Christoph M; Blanke, Olaf
2017-03-01
The main model of visual processing in primates proposes an anatomo-functional distinction between the dorsal stream, specialized in spatio-temporal information, and the ventral stream, which processes essentially form information. However, these two pathways also communicate and share much visual information. These dorso-ventral interactions have been studied using form-from-motion (FfM) stimuli, revealing that FfM perception first activates dorsal regions (e.g., MT+/V5), followed by successive activations of ventral regions (e.g., LOC). However, relatively little is known about the consequences of focal damage to visual areas for these dorso-ventral interactions. In the present case report, we investigated the dynamics of dorsal and ventral activations related to FfM perception (using topographical ERP analysis and electrical source imaging) in a patient suffering from a deficit in FfM perception due to right extrastriate brain damage in the ventral stream. Despite the patient's FfM impairment, both successful (observed for the highest level of FfM signal) and absent/failed FfM perception evoked the same temporal sequence of three processing states observed previously in healthy subjects. During the first period, brain source localization revealed cortical activations along the dorsal stream, consistent with preserved elementary motion processing. During the latter two periods, the patterns of activity differed from those of normal subjects: activations were observed in the ventral stream (as reported for normal subjects), but also in the dorsal pathway, with the strongest and most sustained activity localized in the parieto-occipital regions. In contrast, absent/failed FfM perception was characterized by weaker brain activity, restricted to the more lateral regions.
This study shows that, in the present case report, successful FfM perception, while following the same temporal sequence of processing steps as in normal subjects, evoked different patterns of brain activity. By revealing a brain circuit involving the most rostral part of the dorsal pathway, this study provides further support for neuroimaging and brain lesion studies suggesting the existence of different brain circuits associated with different profiles of interaction between the dorsal and ventral streams.
Functional neural substrates of posterior cortical atrophy patients.
Shames, H; Raz, N; Levin, Netta
2015-07-01
Posterior cortical atrophy (PCA) is a neurodegenerative syndrome in which the most pronounced pathologic involvement is in the occipito-parietal visual regions. Herein, we aimed to better define the cortical reflection of this unique syndrome using a thorough battery of behavioral and functional MRI (fMRI) tests. Eight PCA patients underwent extensive testing to map their visual deficits. Assessments included visual functions associated with lower and higher components of the cortical hierarchy, as well as dorsal- and ventral-related cortical functions. fMRI was performed on five patients to examine the neuronal substrate of their visual functions. The PCA patient cohort exhibited impairments in stereopsis, saccadic eye movements and higher dorsal stream-related functions, including simultaneous perception, image orientation, figure-ground segregation, closure and spatial orientation. In accordance with the behavioral findings, fMRI revealed intact activation in the ventral visual regions for face and object perception, while more dorsal aspects of perception, including motion and gestalt perception, revealed impaired patterns of activity. In most of the patients, there was a lack of activity in the visual word form area, which is known to be linked to reading disorders. Finally, there was evidence of reduced cortical representation of the peripheral visual field, corresponding to the behaviorally assessed peripheral visual deficit. The findings are discussed in the context of networks extending from parietal regions, which mediate navigation-related processing, visually guided actions, eye movement control and working memory, suggesting that damage to these networks might explain the wide range of deficits in PCA patients.
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1990-01-01
The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial frequency performance of primates or the performance necessary for form perception in humans. This discrepancy, together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision, prompted a series of image processing experiments intended to guide the selection of image operators. The experiments were aimed at determining operators which could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and of circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators with peak spatial frequency responses at about 11 and 33 cyc/deg were found to capture the primary structural information in images. Here, larger-scale circular DOG operators were explored; they led to severe loss of image structure and introduced spatial dislocations (due to blur) that are not consistent with visual perception. Orientation-sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes which are functionally similar to natural visual form perception, two circularly symmetric, very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
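The circular DOG operators discussed above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: the kernel size, center sigma, and the 1.6 center-surround ratio are assumed demonstration values, and mapping a sigma in pixels to a peak spatial frequency in cyc/deg would depend on viewing geometry.

```python
import math

def circular_dog(size, sigma_c, ratio=1.6):
    """Circularly symmetric difference-of-Gaussians (DOG) kernel.

    sigma_c sets the excitatory center scale; the inhibitory surround is
    ratio * sigma_c (1.6 approximates a Laplacian-of-Gaussian). A smaller
    sigma_c shifts the band-pass peak to higher spatial frequencies.
    """
    half = (size - 1) / 2.0
    sigma_s = ratio * sigma_c

    def g(r2, s):  # 2D isotropic Gaussian, continuous normalization
        return math.exp(-r2 / (2 * s * s)) / (2 * math.pi * s * s)

    return [[g((i - half) ** 2 + (j - half) ** 2, sigma_c)
             - g((i - half) ** 2 + (j - half) ** 2, sigma_s)
             for j in range(size)] for i in range(size)]

def dog_filter(image, kernel):
    """'Same'-size correlation of a grayscale image (list of rows) with the
    kernel; the kernel is symmetric, so this equals convolution. Borders
    are handled by replicating edge pixels."""
    ks = len(kernel)
    pad = ks // 2
    h, w = len(image), len(image[0])

    def px(i, j):  # edge-replicating pixel access
        return image[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]

    return [[sum(kernel[u][v] * px(i + u - pad, j + v - pad)
                 for u in range(ks) for v in range(ks))
             for j in range(w)] for i in range(h)]
```

Because the kernel is nearly zero-sum, uniform image regions map to (approximately) zero and the operator responds only at luminance structure, such as a step edge.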
Interobject grouping facilitates visual awareness.
Stein, Timo; Kaiser, Daniel; Peelen, Marius V
2015-01-01
In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.
A Suggested Method for Pre-School Identification of Potential Reading Disability.
ERIC Educational Resources Information Center
Newton, Kenneth R.; And Others
The relationships between prereading measures of visual-motor-perceptual skills and reading achievement were studied. Subjects were 172 first graders. Pretests and post-tests for word recognition, motor coordination, and visual perception were administered. Fourteen variables were tested. Results indicated that form-copying was more effective than…
The role of stereopsis (three-dimensional vision) in dentistry: review of the current literature.
Syrimi, M; Ali, N
2015-05-22
Clinical dental work is placing increasing demands on a clinician's vision as new techniques that require fine detail become more common. A high degree of hand-eye coordination requires good visual acuity as well as other psychological and neurological qualities such as stereopsis. Stereopsis (three-dimensional vision) is the highest form of depth perception, obtained from the disparity between the images formed on the retinas of the two eyes. It is believed to confer functional benefits on everyday tasks such as hand-eye coordination. Although its role in depth perception has long been established, little is known regarding the importance of stereopsis in dentistry. This article reviews the role of stereopsis in everyday life and the available literature on its importance in dentistry.
Visual shape perception as Bayesian inference of 3D object-centered shape representations.
Erdogan, Goker; Jacobs, Robert A
2017-11-01
Despite decades of research, little is known about how people visually perceive object shape. We hypothesize that a promising approach to shape perception is provided by a "visual perception as Bayesian inference" framework which augments an emphasis on visual representation with an emphasis on the idea that shape perception is a form of statistical inference. Our hypothesis claims that shape perception of unfamiliar objects can be characterized as statistical inference of 3D shape in an object-centered coordinate system. We describe a computational model based on our theoretical framework, and provide evidence for the model along two lines. First, we show that, counterintuitively, the model accounts for viewpoint-dependency of object recognition, traditionally regarded as evidence against people's use of 3D object-centered shape representations. Second, we report the results of an experiment using a shape similarity task, and present an extensive evaluation of existing models' abilities to account for the experimental data. We find that our shape inference model captures subjects' behaviors better than competing models. Taken as a whole, our experimental and computational results illustrate the promise of our approach and suggest that people's shape representations of unfamiliar objects are probabilistic, 3D, and object-centered. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
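The "shape perception as Bayesian inference" framing can be illustrated with a toy posterior computation. This is emphatically not the authors' model: the hypothesis set, the feature vectors, and the isotropic Gaussian likelihood below are invented purely for illustration of the inference step P(shape | image) ∝ P(image | shape) P(shape).

```python
import math

def shape_posterior(observation, hypotheses, prior, sigma=0.5):
    """Posterior over 3D shape hypotheses given one noisy 2D feature vector.

    Each hypothesis predicts the feature vector its projection would
    produce; the likelihood is an isotropic Gaussian around that
    prediction, and Bayes' rule renormalizes prior * likelihood.
    """
    def log_lik(pred):
        return -sum((o - p) ** 2
                    for o, p in zip(observation, pred)) / (2 * sigma ** 2)

    unnorm = {name: prior[name] * math.exp(log_lik(pred))
              for name, pred in hypotheses.items()}
    z = sum(unnorm.values())
    return {name: w / z for name, w in unnorm.items()}

# Hypothetical example: two candidate objects described by edge-length
# features; the noisy observation is closest to the cube's prediction.
hypotheses = {"cube": (1.0, 1.0, 1.0), "slab": (1.0, 1.0, 0.2)}
prior = {"cube": 0.5, "slab": 0.5}
post = shape_posterior((0.9, 1.1, 0.95), hypotheses, prior)
```

The same observation under a different prior would shift the posterior, which is the sense in which perception here is statistical inference rather than template matching.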
The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.
Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc
2009-05-06
The influential model of visual information processing by Milner and Goodale (1995) suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation between disturbed visual action and disturbed visual perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA about the selective role of ventral stream structures in shape and orientation perception were difficult. Here, we report patient J.S., who demonstrated VFA after a well-circumscribed brain lesion of stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral to the normal flow of shape and contour information into the ventral stream system that allows objects to be recognized.
Predator perception and the interrelation between different forms of protective coloration
Stevens, Martin
2007-01-01
Animals possess a range of defensive markings to reduce the risk of predation, including warning colours, camouflage, eyespots and mimicry. These different strategies are frequently considered independently, and with little regard towards predator vision, even though they may be linked in various ways and can be fully understood only in terms of predator perception. For example, camouflage and warning coloration need not be mutually exclusive, and may frequently exploit similar features of visual perception. This paper outlines how different forms of protective markings can be understood from predator perception and illustrates how this is fundamental in determining the mechanisms underlying, and the interrelation between, different strategies. Suggestions are made for future work, and potential mechanisms discussed in relation to various forms of defensive coloration, including disruptive coloration, eyespots, dazzle markings, motion camouflage, aposematism and mimicry. PMID:17426012
Color-binding errors during rivalrous suppression of form.
Hong, Sang Wook; Shevell, Steven K
2009-09-01
How does a physical stimulus determine a conscious percept? Binocular rivalry provides useful insights into this question because constant physical stimulation during rivalry causes different visual experiences. For example, presentation of vertical stripes to one eye and horizontal stripes to the other eye results in a percept that alternates between horizontal and vertical stripes. Presentation of a different color to each eye (color rivalry) produces alternating percepts of the two colors or, in some cases, a color mixture. The experiments reported here reveal a novel and instructive resolution of rivalry for stimuli that differ in both form and color: perceptual alternation between the rivalrous forms (e.g., horizontal or vertical stripes), with both eyes' colors seen simultaneously in separate parts of the currently perceived form. Thus, the colors presented to the two eyes (a) maintain their distinct neural representations despite resolution of form rivalry and (b) can bind separately to distinct parts of the perceived form.
Age, School Experience and the Development of Visual-Perceptual Memory. Final Report, Part 2.
ERIC Educational Resources Information Center
Goulet, L. R.
This study attempted to investigate the effects of school experience on visual perception tests involving line figures and forms. There were two experiments in this study. Experiment 1 examined the independent and interactive influences of school experience and chronological age in kindergarten children. Experiment 2 compared the effects of…
NASA Technical Reports Server (NTRS)
Reschke, M. F.; Parker, D. E.; Arrott, A. P.
1986-01-01
Report discusses physiological and physical concepts of proposed training system to precondition astronauts to weightless environment. System prevents motion sickness, often experienced during early part of orbital flight. Also helps prevent seasickness and other forms of terrestrial motion sickness. Training affects subject's perception of inner-ear signals, visual signals, and kinesthetic motion perception. Changed perception resembles that of astronauts who spent many days in space and adapted to weightlessness.
Gestalt perception modulates early visual processing.
Herrmann, C S; Bosch, V
2001-04-17
We examined whether early visual processing reflects perceptual properties of a stimulus in addition to its physical features. We recorded event-related potentials (ERPs) from 13 subjects in a visual classification task. We used four different stimuli, all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square; another was composed of the same number of collinear line segments, but its elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) from that of illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that in response to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.
When apperceptive agnosia is explained by a deficit of primary visual processing.
Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta
2014-03-01
Visual agnosia is a deficit in shape perception affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions of high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented with visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia can be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read, or recognize familiar faces. He was able to recognize objects by touch and people by their voices. Assessment of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception were preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting the "edges" of visual stimuli that build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
The psychophysics of Visual Motion and Global form Processing in Autism
ERIC Educational Resources Information Center
Koldewyn, Kami; Whitney, David; Rivera, Susan M.
2010-01-01
Several groups have recently reported that people with autism may suffer from a deficit in visual motion processing and proposed that these deficits may be related to a general dorsal stream dysfunction. In order to test the dorsal stream deficit hypothesis, we investigated coherent and biological motion perception as well as coherent form…
Basic visual function and cortical thickness patterns in posterior cortical atrophy.
Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J
2011-09-01
Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also considerable overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.
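Motion-coherence tasks of the kind measured here are typically built on random-dot kinematograms, in which only a proportion of dots carries the signal direction. A minimal frame-update sketch follows; the field size, dot speed, and wrap-around handling are illustrative assumptions, and real psychophysics code would also manage dot lifetimes and replotting:

```python
import math
import random

def update_dots(dots, coherence, direction_deg, speed, field):
    """One frame of a random-dot kinematogram.

    A proportion `coherence` of dots steps in the signal direction; the
    remainder step in random directions. Positions wrap within a square
    field of side `field`.
    """
    theta_signal = math.radians(direction_deg)
    updated = []
    for x, y in dots:
        # Each dot is independently assigned to signal or noise this frame.
        theta = theta_signal if random.random() < coherence \
            else random.uniform(0.0, 2.0 * math.pi)
        updated.append(((x + speed * math.cos(theta)) % field,
                        (y + speed * math.sin(theta)) % field))
    return updated
```

Threshold coherence, the smallest proportion of signal dots at which the observer reliably reports the global direction, is the quantity such tasks estimate.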
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
Effects of cortical damage on binocular depth perception.
Bridge, Holly
2016-06-19
Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across the local regions of the visual field and integration with other cues to depth. The most common cause for loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input is intact from both eyes, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including both discrete damage, temporal lobectomy and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors.
The effect of contextual sound cues on visual fidelity perception.
Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam
2014-01-01
Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.
37 CFR 211.5 - Deposit of identifying material.
Code of Federal Regulations, 2011 CFR
2011-07-01
... fixed in the form of the semiconductor chip product in which it was first commercially exploited... photograph of each layer of the work fixed in a semiconductor chip product. The visually perceptible... complete form of the mask work as fixed in a semiconductor product. (ii) Where the mask work contribution...
Color vision in ADHD: part 2--does attention influence color perception?
Kim, Soyeon; Al-Haj, Mohamed; Fuller, Stuart; Chen, Samantha; Jain, Umesh; Carrasco, Marisa; Tannock, Rosemary
2014-10-24
To investigate the impact of exogenous covert attention on chromatic (blue and red) and achromatic visual perception in adults with and without Attention Deficit Hyperactivity Disorder (ADHD). Exogenous covert attention, which is a transient, automatic, stimulus-driven form of attention, is a key mechanism for selecting relevant information in visual arrays. 30 adults diagnosed with ADHD and 30 healthy adults, matched on age and gender, performed a psychophysical task designed to measure the effects of exogenous covert attention on perceived color saturation (blue, red) and contrast sensitivity. The effects of exogenous covert attention on perceived blue and red saturation levels and contrast sensitivity were similar in both groups, with no differences between males and females. Specifically, exogenous covert attention enhanced the perception of blue saturation and contrast sensitivity, but it had no effect on the perception of red saturation. The findings suggest that exogenous covert attention is intact in adults with ADHD and does not account for the observed impairments in the perception of chromatic (blue and red) saturation.
Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J. J.; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine
2017-01-01
The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer’s motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex. PMID:28861024
Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area
Yoncheva, Yuliya N.; Zevin, Jason D.; Maurer, Urs
2010-01-01
Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level–dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers. PMID:19571269
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Bo (State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101); Xia, Jing
Physiological and behavioral studies have demonstrated that a number of visual functions such as visual acuity, contrast sensitivity, and motion perception can be impaired by acute alcohol exposure. The orientation- and direction-selective responses of cells in primary visual cortex are thought to participate in the perception of form and motion. To investigate how orientation selectivity and direction selectivity of neurons are influenced by acute alcohol exposure in vivo, we used the extracellular single-unit recording technique to examine the response properties of neurons in primary visual cortex (A17) of adult cats. We found that alcohol reduces spontaneous activity, visual evoked unit responses, the signal-to-noise ratio, and orientation selectivity of A17 cells. In addition, small but detectable changes in both the preferred orientation/direction and the bandwidth of the orientation tuning curve of strongly orientation-biased A17 cells were observed after acute alcohol administration. Our findings may provide physiological evidence for some alcohol-related deficits in visual function observed in behavioral studies.
Non-auditory factors affecting urban soundscape evaluation.
Jeon, Jin Yong; Lee, Pyoung Jik; Hong, Joo Young; Cabrera, Densil
2011-12-01
The aim of this study is to characterize urban spaces, which combine landscape, acoustics, and lighting, and to investigate people's perceptions of urban soundscapes through quantitative and qualitative analyses. A general questionnaire survey and soundwalk were performed to investigate soundscape perception in urban spaces. Non-auditory factors (visual image, day lighting, and olfactory perceptions), as well as acoustic comfort, were selected as the main contexts that affect soundscape perception, and context preferences and overall impressions were evaluated using an 11-point numerical scale. For qualitative analysis, a semantic differential test was performed in the form of a social survey, and subjects were also asked to describe their impressions during a soundwalk. The results showed that urban soundscapes can be characterized by soundmarks, and soundscape perceptions are dominated by acoustic comfort, visual images, and day lighting, whereas reverberance in urban spaces does not yield consistent preference judgments. It is posited that the subjective evaluation of reverberance can be replaced by physical measurements. The categories extracted from the qualitative analysis revealed that spatial impressions such as openness and density emerged as some of the contexts of soundscape perception. © 2011 Acoustical Society of America
Subjective Perception of Visual Distortions or Scotomas in Individuals with Retinitis Pigmentosa
ERIC Educational Resources Information Center
Wittich, Walter; Watanabe, Donald H.; Kapusta, Michael A.; Overbury, Olga
2011-01-01
It is often assumed that persons who develop ocular disease have some form of visual experience that makes them aware of their deficits. However, in the case of peripheral field loss or decreasing vision in dim lighting, as in retinitis pigmentosa, for example, symptoms are more obscure and may not be as easily identified by the persons who have…
Weiss, Peter H; Zilles, Karl; Fink, Gereon R
2005-12-01
In synesthesia, stimulation of one sensory modality (e.g., hearing) triggers a percept in another, non-stimulated sensory modality (e.g., vision). Likewise, perception of a form (e.g., a letter) may induce a color percept (i.e., grapheme-color synesthesia). To date, the neural mechanisms underlying synesthesia remain to be elucidated. Using fMRI, while controlling for surface color processing, we found enhanced activity in the left intraparietal cortex during the experience of grapheme-color synesthesia (n = 9). In contrast, the perception of surface color per se activated the color centers in the fusiform gyrus bilaterally. The data support theoretical accounts that grapheme-color synesthesia may originate from enhanced cross-modal binding of form and color. A mismatch between surface color and the synesthetically felt color induced by a grapheme additionally activated the left dorsolateral prefrontal cortex (DLPFC). This suggests that cognitive control processes become active to resolve the perceptual conflict resulting from synesthesia.
Blind subjects construct conscious mental images of visual scenes encoded in musical form.
Cronly-Dillon, J; Persaud, K C; Blore, R
2000-01-01
Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2014-01-01
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
A comparison of form processing involved in the perception of biological and nonbiological movements
Thurman, Steven M.; Lu, Hongjing
2016-01-01
Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception. PMID:26746875
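The reliability-weighted cue integration performed by the Bayesian observer model in this abstract can be illustrated with a minimal sketch. This is my own illustration under the standard Gaussian-cue assumption, not the authors' model; the function name and numbers are invented for the example.

```python
# Minimal sketch of reliability-weighted (inverse-variance) integration of
# two conflicting form cues, e.g. a position cue and an orientation cue.
# Assumes each cue yields a Gaussian estimate of perceived direction.
def integrate_cues(mu_a, var_a, mu_b, var_b):
    """Combine two Gaussian cue estimates; the more reliable (lower-variance)
    cue receives the larger weight, as in a simple Bayesian observer."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    mu = w_a * mu_a + w_b * mu_b          # integrated estimate
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)  # integrated uncertainty
    return mu, var
```

Under this scheme a top-down bias, such as the crowd context in Experiments 2 and 3, can be modeled simply by rescaling the prior weight on one internal template rather than changing the bottom-up integration rule.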
1991-05-10
extrasensory areas, where the transfer is mediated by activity in the hippocampus. One of the objectives of this proposed research is to determine… [Fragmentary record; recoverable references:] Visual Perception: The Neurophysiological Foundations, Academic, New York, 1989; M. Livingstone and D. Hubel, "Segregation of form, color, movement, and depth: Anatomy, physiology, and perception," Science, 240:740-749, 1988; M. S. Livingstone and D. H. Hubel, "Anatomy and physiology of a color…"
Tachistoscopic exposure and masking of real three-dimensional scenes
Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.
2010-01-01
Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129
Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain
Quiroga, Rodrigo Quian; Kraskov, Alexander; Koch, Christof; Fried, Itzhak
2010-01-01
Different pictures of Marilyn Monroe can evoke the same percept, even if greatly modified as in Andy Warhol’s famous portraits. But how does the brain recognize highly variable pictures as the same percept? Various studies have provided insights into how visual information is processed along the “ventral pathway,” via both single-cell recordings in monkeys [1, 2] and functional imaging in humans [3, 4]. Interestingly, in humans, the same “concept” of Marilyn Monroe can be evoked with other stimulus modalities, for instance by hearing or reading her name. Brain imaging studies have identified cortical areas selective to voices [5, 6] and visual word forms [7, 8]. However, how visual, text, and sound information can elicit a unique percept is still largely unknown. By using presentations of pictures and of spoken and written names, we show that (1) single neurons in the human medial temporal lobe (MTL) respond selectively to representations of the same individual across different sensory modalities; (2) the degree of multimodal invariance increases along the hierarchical structure within the MTL; and (3) such neuronal representations can be generated within less than a day or two. These results demonstrate that single neurons can encode percepts in an explicit, selective, and invariant manner, even if evoked by different sensory modalities. PMID:19631538
ERIC Educational Resources Information Center
Whitcraft, Carol
Investigations and theories concerning interrelationships of motoric experiences, perceptual-motor skills, and learning are reviewed, with emphasis on early engramming of form and space concepts. Covered are studies on haptic perception of form, the matching of perceptual data and motor information, Kephart's perceptual-motor theory, and…
Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie
2017-08-01
Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies have been designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1, in typical developing children. Study 2 is aimed at finding out if these skills can be seen as clinical markers in dyslexic children (DD). Study 3 determines if visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether they exhibit or not developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration and reading, and to differentiate cognitive profiles of children with developmental disabilities (i.e. DD, DCD, and comorbid children). Copyright © 2017 John Wiley & Sons, Ltd.
An empirical investigation of the visual rightness theory of picture perception.
Locher, Paul J
2003-10-01
This research subjected the visual rightness theory of picture perception to experimental scrutiny. It investigated the ability of adults untrained in the visual arts to discriminate between reproductions of original abstract and representational paintings by renowned artists and two experimentally manipulated, less well-organized versions of each art stimulus. Perturbed stimuli contained either minor or major disruptions in the originals' principal structural networks. Participants were significantly more successful than chance in discriminating originals from their highly altered, but not their slightly altered, versions. Accuracy of detection was found to be a function of style of painting and a viewer's way of thinking about a work as determined from their verbal reactions to it. Specifically, hit rates for originals were highest for abstract works when participants focused on their compositional style and form, and highest for representational works when their content and realism were the focus of attention. Findings support the view that visually right (i.e., "good") compositions have efficient structural organizations that are visually salient to viewers who lack formal training in the visual arts.
Visuoperceptual impairment in dementia with Lewy bodies.
Mori, E; Shimomura, T; Fujimori, M; Hirono, N; Imamura, T; Hashimoto, M; Tanimukai, S; Kazui, H; Hanihara, T
2000-04-01
In dementia with Lewy bodies (DLB), vision-related cognitive and behavioral symptoms are common, and involvement of the occipital visual cortices has been demonstrated in functional neuroimaging studies. To delineate visuoperceptual disturbance in patients with DLB in comparison with that in patients with Alzheimer disease and to explore the relationship between visuoperceptual disturbance and the vision-related cognitive and behavioral symptoms. Case-control study. Research-oriented hospital. Twenty-four patients with probable DLB (based on criteria of the Consortium on DLB International Workshop) and 48 patients with probable Alzheimer disease (based on criteria of the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association) who were matched to those with DLB 2:1 by age, sex, education, and Mini-Mental State Examination score. Four test items to examine visuoperceptual functions, including the object size discrimination, form discrimination, overlapping figure identification, and visual counting tasks. Compared with patients with probable Alzheimer disease, patients with probable DLB scored significantly lower on all the visuoperceptual tasks (P<.04 to P<.001). In the DLB group, patients with visual hallucinations (n = 18) scored significantly lower on the overlapping figure identification (P = .01) than those without them (n = 6), and patients with television misidentifications (n = 5) scored significantly lower on the size discrimination (P<.001), form discrimination (P = .01), and visual counting (P = .007) than those without them (n = 19). Visual perception is defective in probable DLB. The defective visual perception plays a role in development of the visual hallucinations, delusional misidentifications, visual agnosias, and visuoconstructive disability characteristic of DLB.
Kowalski, Ireneusz M.; Domagalska, Małgorzata; Szopa, Andrzej; Dwornik, Michał; Kujawa, Jolanta; Stępień, Agnieszka; Śliwiński, Zbigniew
2012-01-01
Introduction Central nervous system damage in early life results in both quantitative and qualitative abnormalities of psychomotor development. Late sequelae of these disturbances may include visual perception disorders which not only affect the ability to read and write but also generally influence the child's intellectual development. This study sought to determine whether a central coordination disorder (CCD) in early life treated according to Vojta's method with elements of the sensory integration (S-I) and neuro-developmental treatment (NDT)/Bobath approaches affects development of visual perception later in life. Material and methods The study involved 44 participants aged 15-16 years, including 19 diagnosed with moderate or severe CCD in the neonatal period, i.e. during the first 2-3 months of life, with diagnosed mild degree neonatal encephalopathy due to perinatal anoxia, and 25 healthy people without a history of developmental psychomotor disturbances in the neonatal period. The study tool was a visual perception IQ test comprising 96 graphic tasks. Results The study revealed equal proportions of participants (p < 0.05) defined as very skilled (94-96), skilled (91-94), average (71-91), poor (67-71), and very poor (0-67) in both groups. These results mean that adolescents with a history of CCD in the neonatal period did not differ with regard to the level of visual perception from their peers who had not demonstrated psychomotor development disorders in the neonatal period. Conclusions Early treatment of children with CCD affords a possibility of normalising their psychomotor development early enough to prevent consequences in the form of cognitive impairments in later life. PMID:23185199
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
A PDP model of the simultaneous perception of multiple objects
NASA Astrophysics Data System (ADS)
Henderson, Cynthia M.; McClelland, James L.
2011-06-01
Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.
How dolphins see the world: a comparison with chimpanzees and humans.
Tomonaga, Masaki; Uwano, Yuka; Saito, Toyoshi
2014-01-16
Bottlenose dolphins use auditory (or echoic) information to recognise their environments, and many studies have described their echolocation perception abilities. However, relatively few systematic studies have examined their visual perception. We tested dolphins on a visual-matching task using two-dimensional geometric forms including various features. Based on error patterns, we used multidimensional scaling to analyse perceptual similarities among stimuli. In addition to dolphins, we conducted comparable tests with terrestrial species: chimpanzees were tested on a computer-controlled matching task and humans were tested on a rating task. The overall perceptual similarities among stimuli in dolphins were similar to those in the two species of primates. These results clearly indicate that the visual world is perceived similarly by the three species of mammals, even though each has adapted to a different environment and has differing degrees of dependence on vision.
Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.
Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint
2017-09-13
GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. 
However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.
Neural network architecture for form and motion perception (Abstract Only)
NASA Astrophysics Data System (ADS)
Grossberg, Stephen
1991-08-01
Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's laws; and dependence of motion strength on stimulus orientation and spatial frequency.
These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
A model of attention-guided visual perception and recognition.
Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A
1998-08-01
A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.
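The "fovea-like transformation" in the model's low-level subsystem can be approximated by log-polar sampling, in which sampling density falls off with eccentricity from the fixation point. A minimal sketch of that idea follows; the function name, parameters, and values are illustrative assumptions, not the authors' implementation.

```python
import math

def log_polar_samples(cx, cy, n_rings=4, n_wedges=8, r0=1.0, growth=2.0):
    """Return sample coordinates of a fovea-like log-polar grid centred on the
    fixation point (cx, cy). Ring radii grow geometrically, so spatial
    resolution is high near the centre and coarse in the periphery."""
    pts = []
    for i in range(n_rings):
        r = r0 * growth ** i              # geometric radius growth
        for j in range(n_wedges):
            theta = 2 * math.pi * j / n_wedges
            pts.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return pts
```

Because the grid is re-centred at each fixation, moving the attention window (as the model's motor memory prescribes) simply means resampling the image at a new (cx, cy), which is what makes the stored "what" fragments comparable across fixations.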
A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.
Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T
2015-09-01
We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses-the neurally controlled body patterns-that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.
Effects of color combination and ambient illumination on visual perception time with TFT-LCD.
Lin, Chin-Chiuan; Huang, Kuo-Chen
2009-10-01
An empirical study was carried out to examine the effects of color combination and ambient illumination on visual perception time using TFT-LCDs. The effect of color combination was broken down into two subfactors: luminance contrast ratio and chromaticity contrast. Analysis indicated that luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception time. Visual perception time was shorter at a high luminance contrast ratio than at a low one. Visual perception time under normal ambient illumination was also shorter than at other ambient illumination levels, although stimulus color had a confounding effect on visual perception time. In general, visual perception time was shorter for the primary colors than for the mid-point colors. Based on these results, a normal ambient illumination level and a high luminance contrast ratio appear to be the optimal choices for the design of workplaces with TFT-LCD video display terminals.
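The abstract above turns on the luminance contrast ratio between stimulus and background. A minimal sketch of that quantity, assuming the plain brighter-to-darker luminance ratio (the study's exact formula is not given in the abstract, and the function name and example luminances are illustrative):

```python
def luminance_contrast_ratio(l_target, l_background):
    """Luminance contrast ratio between a target (e.g. text) and its background.

    Definitions vary across studies; this uses the simple ratio of the brighter
    to the darker luminance (in cd/m^2), one common convention.
    """
    hi, lo = max(l_target, l_background), min(l_target, l_background)
    return hi / lo

# Bright characters (120 cd/m^2) on a dark background (12 cd/m^2): a 10:1 ratio,
# which by the study's findings should yield shorter visual perception times
# than a low-contrast pairing such as 24 cd/m^2 on 12 cd/m^2 (a 2:1 ratio).
high_contrast = luminance_contrast_ratio(120.0, 12.0)
low_contrast = luminance_contrast_ratio(24.0, 12.0)
```
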
The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood
ERIC Educational Resources Information Center
Havy, Mélanie; Foroud, Afra; Fais, Laurel; Werker, Janet F.
2017-01-01
Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in…
Pienaar, A E; Barhorst, R; Twisk, J W R
2014-05-01
Perceptual-motor skills contribute to a variety of basic learning skills associated with normal academic success. This study aimed to determine the relationship between academic performance and perceptual-motor skills in first grade South African learners, and whether low-SES (socio-economic status) school type plays a role in such a relationship. This cross-sectional study of the baseline measurements of the NW-CHILD longitudinal study included a stratified random sample of first grade learners (n = 812; 418 boys and 394 girls), with a mean age of 6.78 ± 0.49 years, living in the North West Province (NW) of South Africa. The Beery-Buktenica Developmental Test of Visual-Motor Integration-4 (VMI) was used to assess visual-motor integration, visual perception and hand control, while the Bruininks-Oseretsky Test of Motor Proficiency, short form (BOT2-SF) assessed overall motor proficiency. Academic performance in math, reading and writing was assessed with the Mastery of Basic Learning Areas Questionnaire. Linear mixed-model analyses were performed with SPSS to determine possible differences between the VMI and BOT2-SF standard scores across math, reading and writing mastery categories ranging from no mastery to outstanding mastery. A multinomial multilevel logistic regression analysis was performed to assess the relationship between a clustered score of academic performance and the different determinants. A strong relationship was established between academic performance and VMI, visual perception, hand control and motor proficiency, with a significant relationship between a clustered academic performance score, visual-motor integration and visual perception. A negative association was established between low-SES school type and academic performance, with a common perceptual-motor foundation shared by all basic learning areas.
Visual-motor integration, visual perception, hand control and motor proficiency are closely related to basic academic skills required in the first formal school year, especially among learners in low SES type schools. © 2013 John Wiley & Sons Ltd.
Perception of emotion in abstract artworks: a multidisciplinary approach.
Melcher, David; Bacci, Francesca
2013-01-01
There is a long-standing and fundamental debate regarding how emotion can be expressed by fine art. Some artists and theorists have claimed that certain features of paintings, such as color, line, form, and composition, can consistently express an "objective" emotion, while others have argued that emotion perception is subjective and depends more on expertise of the observer. Here, we discuss two studies in which we have found evidence for consistency in observer ratings of emotion for abstract artworks. We have developed a stimulus set of abstract art images to test emotional priming, both between different painting images and between paintings and faces. The ratings were also used in a computational vision analysis of the visual features underlying emotion expression. Overall, these findings suggest that there is a strong bottom-up and objective aspect to perception of emotion in abstract artworks that may tap into basic visual mechanisms. © 2013 Elsevier B.V. All rights reserved.
From optics to attention: visual perception in barn owls.
Harmening, Wolf M; Wagner, Hermann
2011-11-01
Barn owls are nocturnal predators which have evolved specific sensory and morphological adaptations to a life in dim light. Here, some of the most fundamental properties of spatial vision in barn owls are reviewed. The eye with its tubular shape is rigidly integrated in the skull so that eye movements are very much restricted. The eyes are oriented frontally, allowing for a large binocular overlap. Accommodation, but not pupil dilation, is coupled between the two eyes. The retina is rod dominated and lacks a visible fovea. Retinal ganglion cells form a marked region of highest density that extends into a horizontally oriented visual streak. Behavioural visual acuity and contrast sensitivity are poor, although the optical quality of the ocular media is excellent. A low f-number allows high image quality at low light levels. Vernier acuity was found to be hyperacute. Owls have global stereopsis with hyperacute stereo acuity thresholds. Neurons of the visual Wulst are sensitive to binocular disparities. Orientation-based saliency was demonstrated in a visual-search experiment, and higher cognitive abilities were shown when the owls were able to use illusory contours for object discrimination.
Yellepeddi, Venkata Kashyap; Roberson, Charles
2016-10-25
Objective. To evaluate the impact of animated videos of oral solid dosage form manufacturing as visual instructional aids on pharmacy students' perception and learning. Design. Data were obtained using a validated, paper-based survey instrument designed to evaluate the effectiveness, appeal, and efficiency of the animated videos in a pharmaceutics course offered in spring 2014 and 2015. Basic demographic data were also collected and analyzed. Assessment data at the end of the pharmaceutics course were collected for 2013 and compared with assessment data from 2014 and 2015. Assessment. Seventy-six percent of the respondents supported the idea of incorporating animated videos as instructional aids for teaching pharmaceutics. Students' performance on the formative assessment in 2014 and 2015 improved significantly compared to the performance of students in 2013, whose lectures did not include animated videos as instructional aids. Conclusions. Implementing animated videos of oral solid dosage form manufacturing as instructional aids resulted in improved student learning and favorable student perceptions of the instructional approach. Therefore, animated videos can be incorporated in pharmaceutics teaching to enhance visual learning.
Tunnel vision: sharper gradient of spatial attention in autism.
Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I
2013-04-17
Enhanced perception of detail has long been regarded a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures (visual acuity, contrast discrimination, and flicker detection) is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.
Neural representation of form-contingent color filling-in in the early visual cortex.
Hong, Sang Wook; Tong, Frank
2017-11-01
Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
How dolphins see the world: A comparison with chimpanzees and humans
Tomonaga, Masaki; Uwano, Yuka; Saito, Toyoshi
2014-01-01
Bottlenose dolphins use auditory (or echoic) information to recognise their environments, and many studies have described their echolocation perception abilities. However, relatively few systematic studies have examined their visual perception. We tested dolphins on a visual-matching task using two-dimensional geometric forms including various features. Based on error patterns, we used multidimensional scaling to analyse perceptual similarities among stimuli. In addition to dolphins, we conducted comparable tests with terrestrial species: chimpanzees were tested on a computer-controlled matching task and humans were tested on a rating task. The overall perceptual similarities among stimuli in dolphins were similar to those in the two species of primates. These results clearly indicate that the visual world is perceived similarly by the three species of mammals, even though each has adapted to a different environment and has differing degrees of dependence on vision. PMID:24435017
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space at the same time, they must also effectively perceive their own position as a viewpoint. However, little is known about observers’ perception of their own spatial location based on the visual scene viewed from it. Previous studies indicate that two distinct visual spatial processes exist during locomotion: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments containing only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions. Specifically, we examined the effects of two factors on egocentric direction and position perceptions. One is the “uprightness factor”: “far” visual information is seen at a higher location than “near” visual information. The other is the “central vision factor”: observers usually look at “far” visual information with central (i.e., foveal) vision, whereas “near” visual information is viewed with peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images, in which the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is disrupted by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception. 
Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895
ERIC Educational Resources Information Center
Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei
2011-01-01
Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Perceptual deficits of object identification: apperceptive agnosia.
Milner, A David; Cavina-Pratesi, Cristiana
2018-01-01
It is argued here that apperceptive object agnosia (generally now known as visual form agnosia) is in reality not a kind of agnosia, but rather a form of "imperception" (to use the term coined by Hughlings Jackson). We further argue that its proximate cause is a bilateral loss (or functional loss) of the visual form processing systems embodied in the human lateral occipital cortex (area LO). According to the dual-system model of cortical visual processing elaborated by Milner and Goodale (2006), area LO constitutes a crucial component of the ventral stream, and indeed is essential for providing the figural qualities inherent in our normal visual perception of the world. According to this account, the functional loss of area LO would leave only spared visual areas within the occipito-parietal dorsal stream (dedicated to the control of visually-guided actions) potentially able to provide some aspects of visual shape processing in patients with apperceptive agnosia. We review the relevant evidence from such individuals, concentrating particularly on the well-researched patient D.F. We conclude that studies of this kind can provide useful pointers to an understanding of the processing characteristics of parietal-lobe visual mechanisms and their interactions with occipitotemporal perceptual systems in the guidance of action. Copyright © 2018 Elsevier B.V. All rights reserved.
Visual Memories Bypass Normalization.
Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam
2018-05-01
How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
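The normalization computation that the abstract tests can be written down compactly. A minimal sketch, assuming the standard divisive-normalization form (Heeger-style); the exponent, semi-saturation constant, and function name are illustrative, not taken from the paper:

```python
def normalized_response(drive, pool, n=2.0, sigma=1.0):
    """Canonical divisive normalization.

    A unit's response is its own drive raised to an exponent n, divided by
    the summed exponentiated drive of a normalization pool plus a
    semi-saturation constant sigma. Parameter values are illustrative.
    """
    denom = sigma ** n + sum(d ** n for d in pool)
    return drive ** n / denom

# Adding a second stimulus to the pool suppresses the response to the first,
# which is the signature the study looked for (and found absent) in memory:
alone = normalized_response(2.0, pool=[2.0])        # 4 / (1 + 4)
paired = normalized_response(2.0, pool=[2.0, 2.0])  # 4 / (1 + 4 + 4)
```
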
Concurrent visuomotor behaviour improves form discrimination in a patient with visual form agnosia.
Schenk, Thomas; Milner, A David
2006-09-01
It is now well established that the visual brain is divided into two visual streams, the ventral and the dorsal stream. Milner and Goodale have suggested that the ventral stream is dedicated to vision for perception and the dorsal stream to vision for action [A.D. Milner & M.A. Goodale (1995) The Visual Brain in Action, Oxford University Press, Oxford]. However, it is possible that ongoing processes in the visuomotor stream nevertheless have an effect on perceptual processes. This possibility was examined in the present study. We examined the visual form-discrimination performance of the form-agnosic patient D.F. with and without a concurrent visuomotor task, and found that her performance was significantly improved in the former condition. This suggests that the visuomotor behaviour provides cues that enhance her ability to recognize the form of the target object. In control experiments we ruled out proprioceptive and efferent cues, and therefore propose that D.F. can, to a significant degree, access the object's visuomotor representation in the dorsal stream. Moreover, we show that the grasping-induced perceptual improvement disappears if the target objects differ only in their shape but not their width. This suggests that shape information per se is not used for this grasping task.
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
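The "optimal strategy" referenced above is usually formalized as reliability-weighted (maximum-likelihood) cue combination: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. A minimal sketch under that standard assumption; the function name and example cue values are illustrative:

```python
def integrate_cues(x_form, sigma_form, x_motion, sigma_motion):
    """Reliability-weighted (maximum-likelihood) combination of two cues.

    Each cue contributes in proportion to its reliability 1/sigma^2, and the
    fused estimate has lower variance than either cue alone.
    """
    r_form = 1.0 / sigma_form ** 2      # reliability of the form cue
    r_motion = 1.0 / sigma_motion ** 2  # reliability of the motion cue
    w_form = r_form / (r_form + r_motion)
    x_hat = w_form * x_form + (1.0 - w_form) * x_motion
    sigma_hat = (1.0 / (r_form + r_motion)) ** 0.5
    return x_hat, sigma_hat

# A reliable form cue (sigma=1) dominates a noisier motion cue (sigma=2):
# w_form = 1 / (1 + 0.25) = 0.8, so x_hat = 0.8*0 + 0.2*3 = 0.6,
# and the fused sigma (~0.894) is below the better single cue's sigma (1.0).
x_hat, sigma_hat = integrate_cues(0.0, 1.0, 3.0, 2.0)
```
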
Computational model for perception of objects and motions.
Yang, WenLu; Zhang, LiQing; Ma, LiBo
2008-06-01
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'what' and 'where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives objects, such as form, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, called the receptive fields of the neurons, are learned self-adaptively based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as a measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpass. The resulting receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
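The second-layer learning rule described above (sparse, self-adaptively learned receptive fields) follows the general shape of sparse coding. The sketch below substitutes a standard L1-penalized reconstruction objective for the paper's Kullback-Leibler independence measure, so it is an analogy rather than the authors' algorithm; all names, shapes, and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code_step(W, patches, lr=0.01, sparsity=0.1):
    """One gradient step of a simple sparse-coding learner.

    W       : (n_basis, n_pixels) current basis functions ("receptive fields")
    patches : (n_samples, n_pixels) image patches (real use: whitened patches
              drawn from natural images, per the paper's setup)
    Cost: reconstruction error plus an L1 sparsity penalty, a common stand-in
    for independence-based objectives.
    """
    a = patches @ W.T                                        # linear responses
    a = np.sign(a) * np.maximum(np.abs(a) - sparsity, 0.0)   # soft-threshold
    recon = a @ W                                            # reconstruction
    grad = a.T @ (patches - recon)                           # dW of recon error
    W = W + lr * grad / len(patches)
    return W / np.linalg.norm(W, axis=1, keepdims=True)      # renormalize rows

# Toy run on random "patches"; with natural-image patches the learned rows
# become localized, oriented, bandpass filters resembling simple cells.
W = rng.standard_normal((16, 64))
W /= np.linalg.norm(W, axis=1, keepdims=True)
patches = rng.standard_normal((256, 64))
for _ in range(10):
    W = sparse_code_step(W, patches)
```
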
The what, where and how of auditory-object perception.
Bizley, Jennifer K; Cohen, Yale E
2013-10-01
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
Perceptions of submissiveness: implications for victimization.
Richards, L; Rollerson, B; Phillips, J
1991-07-01
Some researchers have suggested that a precondition of affective submissiveness may increase the likelihood of female victimization in sexual assault, whereas others have suggested that criminal offenders use perceptions of vulnerability when selecting a victim. In this study, based on American college students, men (decoders) rated videotaped women (encoders) as dominant versus submissive using a semantic differential instrument. Cue evaluators analyzed the body language and appearance of the videotaped women using a Likert instrument. The results suggest that (a) men form differentiated perceptions of dominant versus submissive women, (b) such perceptions rely substantially on nonverbal cues, (c) dominant and submissive women display visibly different behaviors and appearances, and (d) men tend to select submissive females for exploitation.
Object form discontinuity facilitates displacement discrimination across saccades.
Demeyer, Maarten; De Graef, Peter; Wagemans, Johan; Verfaillie, Karl
2010-06-01
Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.
Neocortical Rebound Depolarization Enhances Visual Perception
Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji
2015-01-01
Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866
Aural-Visual-Kinesthetic Imagery in Motion Media.
ERIC Educational Resources Information Center
Allan, David W.
Motion media refers to film, television, and other forms of kinesthetic media including computerized multimedia technologies and virtual reality. Imagery reproduced by motion media carries a multisensory amalgamation of mental experiences. The blending of these experiences phenomenologically intersects with the reality and perception of words,…
The first pictures: perceptual foundations of Paleolithic art.
Halverson, J
1992-01-01
Paleolithic representational art has a number of consistent characteristics: the subjects are almost always animals, depicted without scenic background, usually in profile, and mostly in outline; the means of representation are extremely economical, often consisting of only a few strokes that indicate the salient features of the animal which are sufficient to suggest the whole form; and it is naturalistic to a degree, but lacks anything like photographic realism. Two elementary questions are raised in this essay: (i) why did the earliest known attempts at depiction have just these characteristics and not others? and (ii) how are objects so minimally represented recognizable? The answers seem to lie with certain fundamental features of visual perception, especially figure-ground distinction, Gestalt principles of closure and good continuation, line surrogacy, component feature analysis, and canonical imaging. In the earliest pictures the graphic means used are such that they evoke the same visual responses as those involved in the perception of real-world forms, but eschew redundancies of color, texture, linear perspective, and completeness of representation.
Altered figure-ground perception in monkeys with an extra-striate lesion.
Supèr, Hans; Lamme, Victor A F
2007-11-05
The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex, the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception, we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield but normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.
Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu
2014-04-23
Background: How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. Results: The results of this study show that the use of plenoptic (or all-optical) functions and their dual-plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. Conclusions: 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line. PMID:24755246
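The abstract above refers to plenoptic functions Pw and Pv and their dual-plane parameterizations without giving formulas. As a hedged sketch, using standard definitions from the light-field literature rather than the authors' exact notation, the plenoptic function and its two-plane reduction can be written as:

```latex
% Full plenoptic function: radiance along every ray reaching position
% (x, y, z) from direction (\theta, \phi), at wavelength \lambda, time t
P = P(x, y, z, \theta, \phi, \lambda, t)

% Fixing wavelength and time, and restricting attention to rays crossing
% two parallel planes (u, v) and (s, t), gives the 4-D light-field form
L = L(u, v, s, t)

% For a single viewpoint (one camera center or nodal point of the eye),
% the function reduces to a 2-D image:
P = P(\theta, \phi)
```

The two-plane form is one concrete way a higher-dimensional scene description can be carried on planar coordinates, which is the kind of reduction the abstract discusses for the retina-to-cortex mapping.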
Most, Tova; Michaelis, Hilit
2012-08-01
This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.
Perception and control of rotorcraft flight
NASA Technical Reports Server (NTRS)
Owen, Dean H.
1991-01-01
Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationships among these problems are the basis of this article.
ERIC Educational Resources Information Center
Washington County Public Schools, Washington, PA.
Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…
[Visual perception and its disorders].
Ruf-Bächtiger, L
1989-11-21
It is the brain, not the eye, that decides what is perceived. Despite this, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, distinct functional domains of visual perception can be distinguished. Among the more important of these domains are: digit span, visual discrimination, and figure-ground discrimination. Evaluation of these functional domains allows us to better understand children with disorders of visual perception and to develop more effective treatment methods.
Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis
2014-07-01
Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Brown, Ted; Murdolo, Yuki
2015-01-01
The "Developmental Test of Visual Perception-Third Edition" (DTVP-3) is a recent revision of the "Developmental Test of Visual Perception-Second Edition" (DTVP-2). The DTVP-3 is designed to assess the visual perceptual and/or visual-motor integration skills of children from 4 to 12 years of age. The test is standardized using…
A Critical Review of the "Motor-Free Visual Perception Test-Fourth Edition" (MVPT-4)
ERIC Educational Resources Information Center
Brown, Ted; Peres, Lisa
2018-01-01
The "Motor-Free Visual Perception Test-fourth edition" (MVPT-4) is a revised version of the "Motor-Free Visual Perception Test-third edition." The MVPT-4 is used to assess the visual-perceptual ability of individuals aged 4.0 through 80+ years via a series of visual-perceptual tasks that do not require a motor response. Test…
The development of visual speech perception in Mandarin Chinese-speaking children.
Chen, Liang; Lei, Jianghua
2017-01-01
The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13 after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in development of visual speech perception.
Cortical visual prostheses: from microstimulation to functional percept
NASA Astrophysics Data System (ADS)
Najarpour Foroushani, Armin; Pack, Christopher C.; Sawan, Mohamad
2018-04-01
Cortical visual prostheses are intended to restore vision by targeted electrical stimulation of the visual cortex. The perception of spots of light, called phosphenes, resulting from microstimulation of the visual pathway, suggests the possibility of creating a meaningful percept made of phosphenes. However, to date, electrical stimulation of V1 has still not resulted in the perception of phosphenated images that go beyond punctate spots of light. In this review, we summarize the clinical and experimental progress that has been made in generating phosphenes and modulating their associated perceptual characteristics in human and macaque primary visual cortex (V1). We focus specifically on the effects of different microstimulation parameters on perception and we analyse key challenges facing the generation of meaningful artificial percepts. Finally, we propose solutions to these challenges based on the application of supervised learning of population codes for spatial stimulation of visual cortex.
Shourie, Nasrin; Firoozabadi, Mohammad; Badie, Kambiz
2014-01-01
In this paper, differences between multichannel EEG signals of artists and nonartists were analyzed during visual perception and mental imagery of some paintings and in a resting condition using approximate entropy (ApEn). It was found that ApEn is significantly higher for artists during visual perception and mental imagery in the frontal lobe, suggesting that artists process more information during these conditions. It was also observed that ApEn decreases for the two groups during visual perception, due to increasing mental load; however, their variation patterns are different. This difference may be used for measuring progress in novice artists. In addition, it was found that ApEn is significantly lower during visual perception than mental imagery in some of the channels, suggesting that the visual perception task requires more cerebral effort.
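The ApEn measure used in the EEG study above has a standard definition (Pincus's approximate entropy). The sketch below is a minimal generic implementation, not the authors' code; the embedding dimension m = 2 and tolerance r = 0.2 × SD are conventional defaults, and any correspondence to the paper's exact parameters is an assumption.

```python
import numpy as np

def approximate_entropy(signal, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D signal.

    m        -- embedding dimension (template length)
    r_factor -- match tolerance as a fraction of the signal's SD
    """
    x = np.asarray(signal, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(mi):
        # All overlapping templates of length mi
        templates = np.array([x[i:i + mi] for i in range(N - mi + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]),
                      axis=2)
        # Fraction of templates within tolerance r (self-matches included,
        # so the argument of the log is never zero)
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# A regular signal (sine) should yield lower ApEn than white noise,
# mirroring the abstract's reading of ApEn as signal irregularity.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 500)
apen_sine = approximate_entropy(np.sin(t))
apen_noise = approximate_entropy(rng.standard_normal(500))
```

Higher ApEn for the noise signal reflects more irregular dynamics, which is the sense in which the study interprets higher frontal ApEn in artists as "processing more information".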
Still holding after all these years: An action-perception dissociation in patient DF.
Ganel, Tzvi; Goodale, Melvyn A
2017-09-23
Patient DF, who has bilateral damage in the ventral visual stream, is perhaps the best known individual with visual form agnosia in the world, and has been the focus of scores of research papers over the past twenty-five years. The remarkable dissociation she exhibits between a profound deficit in perceptual report and a preserved ability to generate relatively normal visuomotor behaviour was, early on, a cornerstone of Goodale and Milner's (1992) two visual systems hypothesis. In recent years, however, there has been a greater emphasis on the damage that is evident in the posterior regions of her parietal cortex in both hemispheres. Deficits in several aspects of visuomotor control in the visual periphery have been demonstrated, leading some researchers to conclude that the double dissociation between vision-for-perception and vision-for-action in DF and patients with classic optic ataxia can no longer be assumed to be strong evidence for the division of labour between the dorsal and ventral streams of visual processing. In this short review, we argue that this is not the case. Indeed, after evaluating DF's performance and the location of her brain lesions, a clear picture of a double dissociation between DF and patients with optic ataxia is revealed. More than a quarter of a century after the initial presentation of DF's unique case, she continues to provide compelling evidence for the idea that the ventral stream is critical for the perception of the shape and orientation of objects but not the visual control of skilled actions directed at those objects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Influence of the Casserius Tables on fetal anatomy illustration and how we envision the unborn.
Heilemann, Heidi A
2011-01-01
The paper demonstrates how visual representation of the fetus in early anatomy texts influenced the reader's perception of the unborn child as an autonomous being. The health, art, and history literatures were used as sources. Original texts and illustrations, with particular attention paid to the Casserius Tables, published by Andreas Spigelius in 1627, are discussed. A review of the literature was conducted to identify and analyze published renderings, reproductions, and discussion of images of the unborn child. Original anatomy atlases were consulted. Artists' renderings of a particularly vulnerable state of human life influenced early perceptions of the status of the unborn child. The images show fetuses as highly independent, providing a visual cue that life is fully formed in utero. The legacy of the Casserius Tables is that they are still able to capture our attention because they portray the idea of a fetus and newborn even more clearly than our modern representations of this charged topic. The use of deceptive realism provides the viewer with an accessible visual representation of the unborn child. These early anatomy illustrations continue to influence modern-day perception of the unborn child as a separate being, completely autonomous from the mother.
Stephens, Robert P
2011-01-01
Addiction films have been shaped by the internal demands of a commercial medium. Specifically, melodrama, as a genre, has defined the limits of the visual representation of addiction. Similarly, the process of intermedialization has tended to induce a metamorphosis that shapes disparate narratives with diverse goals into a generic filmic form and substantially alters the meanings of the texts. Ultimately, visual representations shape public perceptions of addiction in meaningful ways, privileging a moralistic understanding of drug addiction that makes a complex issue visually uncomplicated by reinforcing "common sense" ideas of moral failure and redemption. Copyright © 2011 Informa Healthcare USA, Inc.
Attention modulates perception of visual space
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.
2017-01-01
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
The effect of occlusion therapy on motion perception deficits in amblyopia.
Giaschi, Deborah; Chapman, Christine; Meier, Kimberly; Narasimhan, Sathyasri; Regan, David
2015-09-01
There is growing evidence for deficits in motion perception in amblyopia, but these are rarely assessed clinically. In this prospective study we examined the effect of occlusion therapy on motion-defined form perception and multiple-object tracking. Participants included children (3-10 years old) with unilateral anisometropic and/or strabismic amblyopia who were currently undergoing occlusion therapy and age-matched control children with normal vision. At the start of the study, deficits in motion-defined form perception were present in at least one eye in 69% of the children with amblyopia. These deficits were still present at the end of the study in 55% of the amblyopia group. For multiple-object tracking, deficits were present initially in 64% and finally in 55% of the children with amblyopia, even after completion of occlusion therapy. Many of these deficits persisted in spite of an improvement in amblyopic eye visual acuity in response to occlusion therapy. The prevalence of motion perception deficits in amblyopia, as well as their resistance to occlusion therapy, supports the need for new approaches to amblyopia treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.
McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan
2018-04-01
To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers, however no studies investigated exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.
Exogenous Attention Enables Perceptual Learning
Szpiro, Sarit F. A.; Carrasco, Marisa
2015-01-01
Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. PMID:26502745
Toward Objectivity in Diagnosing Learning Disabilities: Refinement of Established Procedures.
ERIC Educational Resources Information Center
Goodman, Marvin; Mina, Elias
Variability in diagnostic procedures and a lack of valid and reliable measures led to the development of a comprehensive battery, which incorporated an operational definition of learning disabilities. The battery consisted of forms for observing these functions: intelligence, academic achievement, gross and fine motor control, visual perception,…
Toward a Psychology of Responses to Dance Performance
ERIC Educational Resources Information Center
Gervasio, Amy Herstein
2012-01-01
This paper applies contemporary principles in cognitive and social psychology to understand how Western ballet and modern dance are imbued with emotional and narrative meaning by an audience. These include nine Gestalt concepts of visual form perception as well as cognitive heuristics of representativeness and availability in concept formation and…
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
Endogenous modulation of human visual cortex activity improves perception at twilight.
Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A
2018-04-10
Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A.; Li, Yuanning; Ward, Michael J.; Richardson, R. Mark; Fiez, Julie A.; Ghuman, Avniel Singh
2016-07-19
The nature of the visual representation for words has been fiercely debated for over 150 years. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
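The representational claim above, that early lmFG activity is consistent with an orthographic similarity space, is typically tested by comparing pairwise dissimilarities of neural patterns against an orthographic distance metric. A minimal, hypothetical sketch of that style of analysis, using simulated bag-of-letters "patterns" in place of real electrode recordings and Levenshtein distance as the orthographic metric (both choices are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance between two words (an orthographic dissimilarity)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def letter_code(word, noise, rng):
    """Stand-in 'neural' pattern: a bag-of-letters vector plus noise."""
    v = np.zeros(26)
    for ch in word:
        v[ord(ch) - ord('a')] += 1.0
    return v + rng.normal(0.0, noise, 26)

def rdm_correlation(words, noise=0.1, seed=0):
    """Correlate pairwise 'neural' distances with orthographic distances."""
    rng = np.random.default_rng(seed)
    patterns = [letter_code(w, noise, rng) for w in words]
    neural, ortho = [], []
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            neural.append(np.linalg.norm(patterns[i] - patterns[j]))
            ortho.append(levenshtein(words[i], words[j]))
    return float(np.corrcoef(neural, ortho)[0, 1])
```

A positive correlation between the two distance matrices is what "consistent with an orthographic similarity space" means operationally in this kind of analysis.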
Spatial Disorientation in Gondola Centrifuges Predicted by the Form of Motion as a Whole in 3-D
Holly, Jan E.; Harmon, Katharine J.
2009-01-01
INTRODUCTION: During a coordinated turn, subjects can misperceive tilts. Subjects accelerating in tilting-gondola centrifuges without external visual reference underestimate the roll angle, and underestimate more when backward-facing than when forward-facing. In addition, during centrifuge deceleration, the perception of pitch can include tumble while paradoxically maintaining a fixed perceived pitch angle. The goal of the present research was to test two competing hypotheses: (1) that components of motion are perceived relatively independently and then combined to form a three-dimensional perception, and (2) that perception is governed by familiarity of motions as a whole in three dimensions, with components depending more strongly on the overall shape of the motion. METHODS: Published experimental data were used from existing tilting-gondola centrifuge studies. The two hypotheses were implemented formally in computer models, and centrifuge acceleration and deceleration were simulated. RESULTS: The second, whole-motion-oriented hypothesis better predicted subjects' perceptions, including the forward-backward asymmetry and the paradoxical tumble upon deceleration. The predominant stimulus at the beginning of the motion was important, as was the familiarity of centripetal acceleration. CONCLUSION: Three-dimensional perception is better predicted by taking into account familiarity with the form of three-dimensional motion. PMID:19198199
Optical phonetics and visual perception of lexical and phrasal stress in English.
Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer
2009-01-01
In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.
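The production-perception analysis above correlates individual optical measures with perception performance and asks which cues contribute "independently of the other measures", i.e. their unique incremental contribution. A hedged sketch with simulated data (the cue names, effect sizes, and sample size are invented for illustration), computing zero-order correlations and each cue's incremental R² over the remaining predictors:

```python
import numpy as np

def r2(X, y):
    """R-squared of an ordinary least-squares fit (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def unique_contribution(X, y, k):
    """Incremental R^2 of column k beyond the remaining predictors."""
    others = np.delete(X, k, axis=1)
    return r2(X, y) - r2(others, y)

rng = np.random.default_rng(1)
n = 400
chin = rng.normal(size=n)                 # hypothetical chin-opening measure
brow = rng.normal(size=n)                 # hypothetical eyebrow measure
cheek = 0.5 * chin + rng.normal(size=n)   # cue partly redundant with chin
# Simulated perception score: chin carries most of the signal.
accuracy = 0.8 * chin + 0.2 * brow + rng.normal(scale=0.5, size=n)

X = np.column_stack([chin, brow, cheek])
r_chin = np.corrcoef(chin, accuracy)[0, 1]          # zero-order correlation
u_chin = unique_contribution(X, accuracy, 0)        # unique contribution
u_cheek = unique_contribution(X, accuracy, 2)
```

In this setup the redundant cheek cue correlates with accuracy but contributes almost nothing uniquely, which mirrors the paper's distinction between correlated measures and the chin measures that carried independent predictive value.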
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
Eye movements and attention in reading, scene perception, and visual search.
Rayner, Keith
2009-08-01
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.
Neural dynamics of 3-D surface perception: figure-ground separation and lightness perception.
Kelly, F; Grossberg, S
2000-11-01
This article develops the FACADE theory of three-dimensional (3-D) vision to simulate data concerning how two-dimensional pictures give rise to 3-D percepts of occluded and occluding surfaces. The theory suggests how geometrical and contrastive properties of an image can either cooperate or compete when forming the boundary and surface representations that subserve conscious visual percepts. Spatially long-range cooperation and short-range competition work together to separate boundaries of occluding figures from their occluded neighbors, thereby providing sensitivity to T-junctions without the need to assume that T-junction "detectors" exist. Both boundary and surface representations of occluded objects may be amodally completed, whereas the surface representations of unoccluded objects become visible through modal processes. Computer simulations include Bregman-Kanizsa figure-ground separation, Kanizsa stratification, and various lightness percepts, including the Münker-White, Benary cross, and checkerboard percepts.
Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark
2017-05-01
There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens model has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular- versus parvocellular-mediated visual processing.
Involvement of Right STS in Audio-Visual Integration for Affective Speech Demonstrated Using MEG
Hagan, Cindy C.; Woods, Will; Johnson, Sam; Green, Gary G. R.; Young, Andrew W.
2013-01-01
Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals. PMID:23950977
Visual Perception of Force: Comment on White (2012)
ERIC Educational Resources Information Center
Hubbard, Timothy L.
2012-01-01
White (2012) proposed that kinematic features in a visual percept are matched to stored representations containing information regarding forces (based on prior haptic experience) and that information in the matched, stored representations regarding forces is then incorporated into visual perception. Although some elements of White's (2012) account…
Human V4 Activity Patterns Predict Behavioral Performance in Imagery of Object Color.
Bannert, Michael M; Bartels, Andreas
2018-04-11
Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery. SIGNIFICANCE STATEMENT Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. 
Color codes in hV4 also predicted behavioral performance in an imagery task, suggesting it forms a perceptual hub for color perception. Copyright © 2018 the authors.
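The key result above is cross-decoding: classifiers trained on responses to physically presented colors generalize to imagined colors. That logic can be sketched with a simple nearest-centroid classifier on simulated voxel patterns; the pattern dimensionality, gains, and noise levels here are assumptions for illustration, not fitted to real fMRI:

```python
import numpy as np

def make_trials(protos, n_per, gain, noise, rng):
    """Simulate trials: each trial is a scaled color prototype plus noise."""
    X, y = [], []
    for c, p in enumerate(protos):
        for _ in range(n_per):
            X.append(gain * p + rng.normal(0.0, noise, p.size))
            y.append(c)
    return np.array(X), np.array(y)

def cross_decode(seed=0):
    """Fit per-color mean patterns on 'perception' trials, then classify
    'imagery' trials by nearest centroid, testing generalization."""
    rng = np.random.default_rng(seed)
    protos = rng.normal(size=(3, 50))                # one pattern per hue
    Xp, yp = make_trials(protos, 20, 1.0, 0.5, rng)  # perception: strong signal
    Xi, yi = make_trials(protos, 20, 0.6, 1.0, rng)  # imagery: weaker, noisier
    centroids = np.array([Xp[yp == c].mean(axis=0) for c in range(3)])
    dists = ((Xi[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = dists.argmin(axis=1)
    return float((pred == yi).mean())
```

Above-chance (here, above 1/3) cross-decoding accuracy is the signature that the same pattern code underlies both the externally driven and the internally generated condition.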
Making memories: the development of long-term visual knowledge in children with visual agnosia.
Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo
2013-01-01
There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment. PMID:24319599
Memory-guided saccade processing in visual form agnosia (patient DF).
Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika
2010-01-01
According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006), areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory-guided compared to the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.
Analyzing the Reading Skills and Visual Perception Levels of First Grade Students
ERIC Educational Resources Information Center
Çayir, Aybala
2017-01-01
The purpose of this study was to analyze primary school first grade students' reading levels and correlate their visual perception skills. For this purpose, students' reading speed, reading comprehension and reading errors were determined using The Informal Reading Inventory. Students' visual perception levels were also analyzed using…
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Fluctuation scaling in the visual cortex at threshold
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-05-01
Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and its functional role in visual processing. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects detected thresholds in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds, with a scaling exponent that departs from 2: β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06), which departs from the expected 4/3 power-function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide an insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
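Fluctuation scaling (Taylor's law) states that variance grows as a power of the mean, Var(x) ∝ ⟨x⟩^β. For a purely multiplicative noise process β = 2 exactly, since scaling the mean scales the standard deviation proportionally; the paper's β = 2.48 reflects additional structure beyond this baseline, such as the weak additive component of its Langevin model. A small simulation, under the simplifying assumption of lognormal multiplicative noise only, recovers the β = 2 baseline from data:

```python
import numpy as np

def fluctuation_scaling_exponent(means, sigma_mult=0.3, n=500, seed=0):
    """Simulate thresholds as mean * lognormal multiplicative noise and
    fit log(variance) against log(mean); pure multiplicative noise
    gives a slope (scaling exponent beta) of 2."""
    rng = np.random.default_rng(seed)
    log_m, log_v = [], []
    for m in means:
        x = m * rng.lognormal(0.0, sigma_mult, n)
        log_m.append(np.log(x.mean()))
        log_v.append(np.log(x.var()))
    slope, _ = np.polyfit(log_m, log_v, 1)
    return float(slope)
```

Repeating this with an additive noise term mixed in (as in the paper's Langevin-type equation) shifts the fitted exponent away from 2, which is the qualitative effect the authors exploit.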
An Analysis of the Concepts of Reading. Final Report.
ERIC Educational Resources Information Center
Ross, James F.
An initial philosophical analysis of "reading" has yielded: (1) that there cannot be a general definition of reading; (2) that the "focal" senses of "to read" indicate that reading is a form of linguistic perception carried out through the exercise of general linguistic abilities, adapted to a visual input of inscriptions with inherent linguistic…
The reliability and clinical correlates of figure-ground perception in schizophrenia.
Malaspina, Dolores; Simon, Naomi; Goetz, Raymond R; Corcoran, Cheryl; Coleman, Eliza; Printz, David; Mujica-Parodi, Lilianne; Wolitzky, Rachel
2004-01-01
Schizophrenia subjects are impaired in a number of visual attention paradigms. However, their performance on tests of figure-ground visual perception (FGP), which requires subjects to visually discriminate figures embedded in a rival background, is relatively unstudied. We examined FGP in 63 schizophrenia patients and 27 control subjects and found that the patients performed the FGP test reliably and had significantly lower FGP scores than the control subjects. Figure-ground visual perception was significantly correlated with other neuropsychological test scores and was inversely related to negative symptoms. It was unrelated to antipsychotic medication treatment. Figure-ground visual perception depends on "top down" processing of visual stimuli, and thus these data suggest that dysfunction in the higher-level pathways that modulate visual perceptual processes may also be related to a core defect in schizophrenia.
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory
2016-02-01
Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309
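The masking-based classification procedure above is a form of reverse correlation: random per-frame visibility masks are related to trial-by-trial responses, and averaging the masks separately by response reveals which frames carried the perceptually relevant information. A toy sketch with simulated trials (the frame count, the informative frame index, and the response probabilities are invented for illustration):

```python
import numpy as np

def classification_image(n_trials=4000, n_frames=10, key_frame=3, seed=0):
    """Reverse correlation: random per-frame visibility masks, with responses
    driven by a single informative frame; the difference between the mean
    mask on 'yes' trials and on 'no' trials recovers that frame."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_trials, n_frames)).astype(float)
    # Response probability jumps when the informative frame is visible.
    p_yes = 0.05 + 0.6 * masks[:, key_frame]
    resp = rng.random(n_trials) < p_yes
    ci = masks[resp].mean(axis=0) - masks[~resp].mean(axis=0)
    return ci
```

The resulting "classification image" peaks at the informative frame and hovers near zero elsewhere; applied to spatiotemporal masks, the same logic yields the high-resolution maps described in the abstract.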
Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.
2017-01-01
Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
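Encoding-model comparisons of this kind are commonly implemented as regularized linear regression on time-lagged stimulus features, with model quality measured by how well the predicted EEG correlates with the recorded EEG. A toy numpy sketch on synthetic data (the signals, lag count, and regularization value are illustrative assumptions; the study's actual pipeline differs in detail):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic stimulus features: acoustic envelope (E) and motion signal (M).
E = rng.standard_normal(n)
M = rng.standard_normal(n)

# Synthetic "EEG" driven by both features plus noise.
eeg = 0.6 * E + 0.4 * M + rng.standard_normal(n)

def lagged(x, n_lags=5):
    """Stack time-lagged copies of x into a design matrix (np.roll wraps
    around at the edges, which is acceptable for a sketch)."""
    return np.column_stack([np.roll(x, k) for k in range(n_lags)])

def ridge_predict(X, y, lam=1.0):
    """Fit ridge regression and return in-sample predictions."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ w

# Compare an E-only model against a combined EM model.
r_E = np.corrcoef(ridge_predict(lagged(E), eeg), eeg)[0, 1]
r_EM = np.corrcoef(ridge_predict(np.hstack([lagged(E), lagged(M)]), eeg), eeg)[0, 1]
```

Because the synthetic EEG genuinely depends on both features, the combined model predicts it better than the envelope-only model, mirroring the abstract's finding that feature combinations outperform their constituents.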
Physics and psychophysics of color reproduction
NASA Astrophysics Data System (ADS)
Giorgianni, Edward J.
1991-08-01
The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.
Exogenous Attention Enables Perceptual Learning.
Szpiro, Sarit F A; Carrasco, Marisa
2015-12-01
Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy in enabling learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. © The Author(s) 2015.
Erlikhman, Gennady; Kellman, Philip J.
2016-01-01
Spatiotemporal boundary formation (SBF) is the perception of illusory boundaries, global form, and global motion from spatially and temporally sparse transformations of texture elements (Shipley and Kellman, 1993a, 1994; Erlikhman and Kellman, 2015). It has been theorized that the visual system uses positions and times of element transformations to extract local oriented edge fragments, which then connect by known interpolation processes to produce larger contours and shapes in SBF. To test this theory, we created a novel display consisting of a sawtooth arrangement of elements that disappeared and reappeared sequentially. Although apparent motion along the sawtooth would be expected, with appropriate spacing and timing, the resulting percept was of a larger, moving, illusory bar. This display approximates the minimal conditions for visual perception of an oriented edge fragment from spatiotemporal information and confirms that such events may be initiating conditions in SBF. Using converging objective and subjective methods, experiments showed that edge formation in these displays was subject to a temporal integration constraint of ~80 ms between element disappearances. The experiments provide clear support for models of SBF that begin with extraction of local edge fragments, and they identify minimal conditions required for this process. We conjecture that these results reveal a link between spatiotemporal object perception and basic visual filtering. Motion energy filters have usually been studied with orientation given spatially by luminance contrast. When orientation is not given in static frames, these same motion energy filters serve as spatiotemporal edge filters, yielding local orientation from discrete element transformations over time. 
As numerous filters of different characteristic orientations and scales may respond to any simple SBF stimulus, we discuss the aperture and ambiguity problems that accompany this conjecture and how they might be resolved by the visual system. PMID:27445886
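The closing conjecture, that motion energy filters can double as spatiotemporal edge filters, follows the standard quadrature-pair energy model: two space-time Gabor filters 90 degrees out of phase are each applied to the stimulus, and their squared responses are summed to give a phase-invariant, direction-selective energy. A small numpy sketch with assumed tuning parameters (1 cycle/deg, 4 Hz; not the authors' implementation):

```python
import numpy as np

# Space-time grid (1D space for simplicity).
x = np.linspace(-2, 2, 64)    # space (deg)
t = np.linspace(0, 0.5, 32)   # time (s)
X, T = np.meshgrid(x, t)

# Space-time oriented Gabor quadrature pair tuned to rightward motion.
envelope = np.exp(-(X ** 2) / 0.5 - ((T - 0.25) ** 2) / 0.02)
phase = 2 * np.pi * (1.0 * X - 4.0 * T)
f_even, f_odd = envelope * np.cos(phase), envelope * np.sin(phase)

def motion_energy(stimulus):
    """Squared quadrature-pair responses summed: phase-invariant energy."""
    return np.sum(stimulus * f_even) ** 2 + np.sum(stimulus * f_odd) ** 2

rightward = np.cos(2 * np.pi * (1.0 * X - 4.0 * T))  # drifts rightward
leftward = np.cos(2 * np.pi * (1.0 * X + 4.0 * T))   # drifts leftward
```

The energy for the preferred (rightward) direction far exceeds that for the opposite direction; applied to discrete element transformations rather than a grating, the same filter yields a local orientation signal, which is the link the abstract conjectures.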
Reading Disability and Visual Perception in Families: New Findings.
ERIC Educational Resources Information Center
Oxford, Rebecca L.
Frequently a variety of visual perception difficulties correlate with reading disabilities. A study was made to investigate the relationship between visual perception and reading disability in families, and to explore the genetic aspects of the relationship. One-hundred twenty-five reading-disabled students, ages 7.5 to 12 years, were matched with…
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J
2013-06-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
Ganz, Aura; Schafer, James; Gandhi, Siddhesh; Puleo, Elaine; Wilson, Carole; Robertson, Meg
2012-01-01
We introduce the PERCEPT system, an indoor navigation system for the blind and visually impaired. PERCEPT will improve the quality of life and health of the visually impaired community by enabling independent living. Using PERCEPT, blind users will have independent access to public health facilities such as clinics, hospitals, and wellness centers. Access to healthcare facilities is crucial for this population because of the multiple health conditions they face, such as diabetes and its complications. Trials of the PERCEPT system with 24 blind and visually impaired users in a multistory building demonstrate its effectiveness in providing appropriate navigation instructions to these users. The uniqueness of our system is that it is affordable and that its design follows orientation and mobility principles. We hope that PERCEPT will become a standard deployed in all indoor public spaces, especially in healthcare and wellness facilities. PMID:23316225
Influence of the Casserius Tables on fetal anatomy illustration and how we envision the unborn*
Heilemann, Heidi A
2011-01-01
Objective: The paper demonstrates how visual representation of the fetus in early anatomy texts influenced the reader's perception of the unborn child as an autonomous being. Data Sources: The health, art, and history literatures were used as sources. Original texts and illustrations, with particular attention paid to the Casserius Tables, published by Andreas Spigelius in 1627, are discussed. Study Selection: A review of the literature was conducted to identify and analyze published renderings, reproductions, and discussion of images of the unborn child. Original anatomy atlases were consulted. Main Results: Artists' renderings of a particularly vulnerable state of human life influenced early perceptions of the status of the unborn child. The images show fetuses as highly independent, providing a visual cue that life is fully formed in utero. Conclusion: The legacy of the Casserius Tables is that they are still able to capture our attention because they portray the idea of a fetus and newborn even more clearly than our modern representations of this charged topic. The use of deceptive realism provides the viewer with an accessible visual representation of the unborn child. These early anatomy illustrations continue to influence modern-day perception of the unborn child as a separate being, completely autonomous from the mother. PMID:21243052
ASCII Art Synthesis from Natural Photographs.
Xu, Xuemiao; Zhong, Linyuan; Xie, Minshan; Liu, Xueting; Qin, Jing; Wong, Tien-Tsin
2017-08-01
While ASCII art is a popular art form worldwide, automatically generating structure-based ASCII art from natural photographs remains challenging. The major challenge lies in extracting the perception-sensitive structure from the natural photographs so that a more concise ASCII art reproduction can be produced based on the structure. However, due to the excessive amount of texture in natural photos, extracting perception-sensitive structure is not easy, especially when the structure may be weak and within the texture region. Besides, to fit different target text resolutions, the amount of extracted structure should also be controllable. To tackle these challenges, we introduce a visual perception mechanism of non-classical receptive field modulation (non-CRF modulation) from physiological findings to this ASCII art application, and propose a new model of non-CRF modulation that can better separate weak structure from crowded texture and better control the scale of texture suppression. Thanks to our non-CRF model, more sensible ASCII art reproduction can be obtained. In addition, to produce more visually appealing ASCII art, we propose a novel optimization scheme to obtain the optimal placement of proportional-font characters. We apply our method to a rich variety of images, and visually appealing ASCII art is obtained in all cases.
Visual Imagery without Visual Perception?
ERIC Educational Resources Information Center
Bertolo, Helder
2005-01-01
The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…
Perception and Attention for Visualization
ERIC Educational Resources Information Center
Haroz, Steve
2013-01-01
This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…
Assessing a VR-based learning environment for anatomy education.
Hoffman, H; Murray, M; Hettinger, L; Viirre, E
1998-01-01
The purpose of the research proposed herein is to develop an empirical, methodological tool for the assessment of visual depth perception in virtual environments (VEs). Our goal is to develop and employ a behaviorally-based method for assessing the impact of VE design features on the perception of visual depth as indexed by the performance of fundamental perceptual-motor activities. Specifically, in this experiment we will assess the effect of two dimensions of VE system design--(1) viewing condition or "level of immersion", and (2) layout/design of the VE--on the performance of an engaging, game-like task. The characteristics of the task to be employed are as follows--(1) it places no demands on cognition in the form of problem solving, retrieval of previously learned information, or other analytic activity in order to assure that (2) variations in task performance can be exclusively attributed to the extent to which the experimental factors influence visual depth perception. Subjects' performance will be assessed in terms of the speed and accuracy of task performance, as well as underlying dimensions of performance such as workload, fatigue, and physiological well-being (i.e., cybersickness). The results of this experiment will provide important information on the effect of VE immersion and other VE design issues on human perception and performance. Further development, refinement, and validation of this behaviorally-based methodology will be pursued to provide user-centered design criteria for the design and use of VE systems.
Do Visual Illusions Probe the Visual Brain?: Illusions in Action without a Dorsal Visual Stream
ERIC Educational Resources Information Center
Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves
2007-01-01
Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…
Tebartz van Elst, Ludger; Bach, Michael; Blessing, Julia; Riedel, Andreas; Bubl, Emanuel
2015-01-01
A common neurodevelopmental disorder, autism spectrum disorder (ASD), is defined by specific patterns in social perception, social competence, communication, highly circumscribed interests, and a strong subjective need for behavioral routines. Furthermore, distinctive features of visual perception, such as markedly reduced eye contact and a tendency to focus more on small, visual items than on holistic perception, have long been recognized as typical ASD characteristics. Recent debate in the scientific community discusses whether the physiology of low-level visual perception might explain such higher visual abnormalities. While earlier reports of enhanced, "eagle-like" visual acuity in ASD contained methodological errors and could not be substantiated, several authors have reported alterations in even earlier stages of visual processing, such as contrast perception and motion perception at the occipital cortex level. Therefore, in this project, we have investigated the electrophysiology of very early visual processing by analyzing the pattern electroretinogram-based contrast gain, the background noise amplitude, and the psychophysical visual acuities of participants with high-functioning ASD and controls with equal education. Based on earlier findings, we hypothesized that alterations in early vision would be present in ASD participants. This study included 33 individuals with ASD (11 female) and 33 control individuals (12 female). The groups were matched in terms of age, gender, and education level. We found no evidence of altered electrophysiological retinal contrast processing or psychophysical measured visual acuities. There appears to be no evidence for abnormalities in retinal visual processing in ASD patients, at least with respect to contrast detection.
Grasp posture alters visual processing biases near the hands
Thomas, Laura E.
2015-01-01
Observers experience biases in visual processing for objects within easy reach of their hands that may assist them in evaluating items that are candidates for action. I investigated the hypothesis that hand postures affording different types of actions differentially bias vision. Across three experiments, participants performed global motion detection and global form perception tasks while their hands were positioned a) near the display in a posture affording a power grasp, b) near the display in a posture affording a precision grasp, or c) in their laps. Although the power grasp posture facilitated performance on the motion task, the precision grasp posture instead facilitated performance on the form task. These results suggest that the visual system weights processing based on an observer’s current affordances for specific actions: fast and forceful power grasps enhance temporal sensitivity, while detail-oriented precision grasps enhance spatial sensitivity. PMID:25862545
Visual perception of ADHD children with sensory processing disorder.
Jung, Hyerim; Woo, Young Jae; Kang, Je Wook; Choi, Yeon Woo; Kim, Kyeong Mi
2014-04-01
The aim of the present study was to investigate the difference in visual perception between ADHD children with and without sensory processing disorder, and the relationship between sensory processing and visual perception in children with ADHD. Participants were 47 outpatients, aged 6-8 years, diagnosed with ADHD. After excluding those who met exclusion criteria, 38 subjects were clustered into two groups, ADHD children with and without sensory processing disorder (SPD), using the SSP reported by their parents; subjects then completed the K-DTVP-2. Spearman correlation analysis was run to determine the relationship between sensory processing and visual perception, and the Mann-Whitney U test was conducted to compare the K-DTVP-2 scores of the two groups. The ADHD children with SPD performed worse than the ADHD children without SPD on the 3 quotients of the K-DTVP-2. The GVP score of the K-DTVP-2 was related to the Movement Sensitivity section (r=0.368*) and the Low Energy/Weak section of the SSP (r=0.369*). The results of the present study suggest that, among children with ADHD, visual perception is lower in those with co-morbid SPD. Also, visual perception may be related to sensory processing, especially reactions of the vestibular and proprioceptive senses. Regarding academic performance, it is necessary to consider how sensory processing issues affect visual perception in children with ADHD.
Differential temporal dynamics during visual imagery and perception.
Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj
2018-05-29
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.
Most, Tova; Aviner, Chen
2009-01-01
This study evaluated the benefits of cochlear implants (CIs) for emotion perception in participants differing in age at implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.
[Peculiarities of visual perception of dentition and smile aesthetic parameters].
Riakhovskiĭ, A N; Usanova, E V
2007-01-01
The studies determined the limits within which displacement of the dentition central line from the facial midline, and changes in the tilt angle of the smile line, become noticeable to visual perception. They also assessed how much visual perception of the dentition aesthetic parameters differed among doctors with different levels of experience, dental technicians, and patients.
Seen, Unseen or Overlooked? How Can Visual Perception Develop through a Multimodal Enquiry?
ERIC Educational Resources Information Center
Payne, Rachel
2012-01-01
This article outlines an exploration into the development of visual perception through analysing the process of taking photographs of the mundane as small-scale research. A preoccupation with social construction of the visual lies at the heart of the investigation by correlating the perceptive process to Mitchell's (2002) counter thesis for visual…
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception
ERIC Educational Resources Information Center
Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.
2016-01-01
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…
Saturation in Phosphene Size with Increasing Current Levels Delivered to Human Visual Cortex.
Bosking, William H; Sun, Ping; Ozker, Muge; Pei, Xiaomei; Foster, Brett L; Beauchamp, Michael S; Yoshor, Daniel
2017-07-26
Electrically stimulating early visual cortex results in a visual percept known as a phosphene. Although phosphenes can be evoked by a wide range of electrode sizes and current amplitudes, they are invariably described as small. To better understand this observation, we electrically stimulated 93 electrodes implanted in the visual cortex of 13 human subjects who reported phosphene size while stimulation current was varied. Phosphene size increased as the stimulation current was initially raised above threshold, but then rapidly reached saturation. Phosphene size also depended on the location of the stimulated site, with size increasing with distance from the foveal representation. We developed a model relating phosphene size to the amount of activated cortex and its location within the retinotopic map. First, a sigmoidal curve was used to predict the amount of activated cortex at a given current. Second, the amount of active cortex was converted to degrees of visual angle by multiplying by the inverse cortical magnification factor for that retinotopic location. This simple model accurately predicted phosphene size for a broad range of stimulation currents and cortical locations. The unexpected saturation in phosphene sizes suggests that the functional architecture of cerebral cortex may impose fundamental restrictions on the spread of artificially evoked activity and this may be an important consideration in the design of cortical prosthetic devices. SIGNIFICANCE STATEMENT Understanding the neural basis for phosphenes, the visual percepts created by electrical stimulation of visual cortex, is fundamental to the development of a visual cortical prosthetic. Our experiments in human subjects implanted with electrodes over visual cortex show that it is the activity of a large population of cells spread out across several millimeters of tissue that supports the perception of a phosphene. 
In addition, we describe an important feature of the production of phosphenes by electrical stimulation: phosphene size saturates at a relatively low current level. This finding implies that, with current methods, visual prosthetics will have a limited dynamic range available to control the production of spatial forms and that more advanced stimulation methods may be required. Copyright © 2017 the authors 0270-6474/17/377188-10$15.00/0.
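The two-stage model in this abstract reduces to a compact formula: a sigmoid maps stimulation current to millimeters of activated cortex, and dividing by the cortical magnification at the stimulated site converts millimeters to degrees of visual angle. A numpy sketch with illustrative parameter values (the sigmoid constants and the magnification coefficients below are assumptions, not the paper's fitted values):

```python
import numpy as np

def phosphene_size(current_ma, eccentricity_deg,
                   max_spread_mm=3.0, slope=2.0, i50_ma=1.0):
    """Two-stage sketch of the phosphene-size model (illustrative parameters).

    Stage 1: a sigmoid maps stimulation current to mm of activated cortex.
    Stage 2: inverse cortical magnification converts mm to degrees.
    """
    spread_mm = max_spread_mm / (1.0 + np.exp(-slope * (current_ma - i50_ma)))
    # Human V1 magnification in the common textbook form M = a / (ecc + b)
    # mm/deg; a and b here are assumed values, not the study's.
    a, b = 17.3, 0.75
    magnification = a / (eccentricity_deg + b)
    return spread_mm / magnification  # degrees of visual angle

# Size saturates with current and grows with distance from the fovea.
currents = np.array([0.5, 2.0, 4.0])  # mA, illustrative
near_fovea = phosphene_size(currents, eccentricity_deg=2.0)
periphery = phosphene_size(currents, eccentricity_deg=10.0)
```

Under these assumptions the size increments shrink as current rises (the saturation the paper reports), and the same cortical spread yields a larger phosphene at higher eccentricity, where magnification is lower.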
Schelonka, Kathryn; Graulty, Christian; Canseco-Gonzalez, Enriqueta; Pitts, Michael A
2017-09-01
A three-phase inattentional blindness paradigm was combined with ERPs. While participants performed a distracter task, line segments in the background formed words or consonant-strings. Nearly half of the participants failed to notice these word-forms and were deemed inattentionally blind. All participants noticed the word-forms in phase 2 of the experiment while they performed the same distracter task. In the final phase, participants performed a task on the word-forms. In all phases, including during inattentional blindness, word-forms elicited distinct ERPs during early latencies (∼200-280ms) suggesting unconscious orthographic processing. A subsequent ERP (∼320-380ms) similar to the visual awareness negativity appeared only when subjects were aware of the word-forms, regardless of the task. Finally, word-forms elicited a P3b (∼400-550ms) only when these stimuli were task-relevant. These results are consistent with previous inattentional blindness studies and help distinguish brain activity associated with pre- and post-perceptual processing from correlates of conscious perception. Copyright © 2017 Elsevier Inc. All rights reserved.
Why do parallel cortical systems exist for the perception of static form and moving form?
Grossberg, S
1991-02-01
This article analyzes computational properties that clarify why the parallel cortical systems V1→V2, V1→MT, and V1→V2→MT exist for the perceptual processing of static visual forms and moving visual forms. The article describes a symmetry principle, called FM symmetry, that is predicted to govern the development of these parallel cortical systems by computing all possible ways of symmetrically gating sustained cells with transient cells and organizing these sustained-transient cells into opponent pairs of on-cells and off-cells whose output signals are insensitive to direction of contrast. This symmetric organization explains how the static form system (static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast and insensitive to direction of motion, whereas the motion form system (motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast but sensitive to direction of motion. FM symmetry clarifies why the geometries of static and motion form perception differ--for example, why the opposite orientation of vertical is horizontal (90 degrees), but the opposite direction of up is down (180 degrees). Opposite orientations and directions are embedded in gated dipole opponent processes that are capable of antagonistic rebound. Negative afterimages, such as the MacKay and waterfall illusions, are hereby explained as are aftereffects of long-range apparent motion. These antagonistic rebounds help to control a dynamic balance between complementary perceptual states of resonance and reset. Resonance cooperatively links features into emergent boundary segmentations via positive feedback in a CC loop, and reset terminates a resonance when the image changes, thereby preventing massive smearing of percepts.
These complementary preattentive states of resonance and reset are related to analogous states that govern attentive feature integration, learning, and memory search in adaptive resonance theory. The mechanism used in the V1→MT system to generate a wave of apparent motion between discrete flashes may also be used in other cortical systems to generate spatial shifts of attention. The theory suggests how the V1→V2→MT cortical stream helps to compute moving form in depth and how long-range apparent motion of illusory contours occurs. These results collectively argue against vision theories that espouse independent processing modules. Instead, specialized subsystems interact to overcome computational uncertainties and complementary deficiencies, to cooperatively bind features into context-sensitive resonances, and to realize symmetry principles that are predicted to govern the development of the visual cortex.
Saccadic Corollary Discharge Underlies Stable Visual Perception
Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.
2016-01-01
Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the old world monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. 
A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647
NASA Technical Reports Server (NTRS)
Hosman, R. J. A. W.; Vandervaart, J. C.
1984-01-01
An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button on a keyboard device. The differing time-histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.
Predictions penetrate perception: Converging insights from brain, behaviour and disorder
O’Callaghan, Claire; Kveraga, Kestutis; Shine, James M; Adams, Reginald B.; Bar, Moshe
2018-01-01
It is argued that during ongoing visual perception, the brain generates top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence indicating that a wide spectrum of domains—including object recognition, contextual associations, cognitive biases and affective state—can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of an imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception. PMID:27222169
Cortical dynamics of feature binding and reset: control of visual persistence.
Francis, G; Grossberg, S; Mingolla, E
1994-04-01
An analysis of the reset of visual cortical circuits responsible for the binding or segmentation of visual features into coherent visual forms yields a model that explains properties of visual persistence. The reset mechanisms prevent massive smearing of visual percepts in response to rapidly moving images. The model simulates relationships among psychophysical data showing inverse relations of persistence to flash luminance and duration, greater persistence of illusory contours than real contours, a U-shaped temporal function for persistence of illusory contours, a reduction of persistence due to adaptation with a stimulus of like orientation, and an increase of persistence with spatial separation of a masking stimulus. The model suggests that a combination of habituative, opponent, and endstopping mechanisms prevents smearing and limits persistence. Earlier work with the model has analyzed data about boundary formation, texture segregation, shape-from-shading, and figure-ground separation. Thus, several types of data support each model mechanism, and new predictions are made.
Perceived Average Orientation Reflects Effective Gist of the Surface.
Cha, Oakyoon; Chong, Sang Chul
2018-03-01
The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.
Feczko, Eric; Shulman, Gordon L.; Petersen, Steven E.; Pruett, John R.
2014-01-01
Findings from diverse subfields of vision research suggest a potential link between high-level aspects of face perception and concentric form-from-structure perception. To explore this relationship, typical adults performed two adaptation experiments and two masking experiments to test whether concentric, but not nonconcentric, Glass patterns (a type of form-from-structure stimulus) utilize a processing mechanism shared by face perception. For the adaptation experiments, subjects were presented with an adaptor for 5 or 20 s, prior to discriminating a target. In the masking experiments, subjects saw a mask, then a target, and then a second mask. Measures of discriminability and bias were derived and repeated measures analysis of variance tested for pattern-specific masking and adaptation effects. Results from Experiment 1 show no Glass pattern-specific effect of adaptation to faces; results from Experiment 2 show concentric Glass pattern masking, but not adaptation, may impair upright/inverted face discrimination; results from Experiment 3 show concentric and radial Glass pattern masking impaired subsequent upright/inverted face discrimination more than translational Glass pattern masking; and results from Experiment 4 show concentric and radial Glass pattern masking impaired subsequent face gender discrimination more than translational Glass pattern masking. Taken together, these findings demonstrate interactions between concentric form-from-structure and face processing, suggesting a possible common processing pathway. PMID:24563526
ERIC Educational Resources Information Center
Gao, Zaifeng; Bentin, Shlomo
2011-01-01
Face perception studies investigated how spatial frequencies (SF) are extracted from retinal display while forming a perceptual representation, or their selective use during task-imposed categorization. Here we focused on the order of encoding low-spatial frequencies (LSF) and high-spatial frequencies (HSF) from perceptual representations into…
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, a measure that has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards, independent of other areas of visual function, and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110 For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
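The correlation-type motion sensors discussed in this review are often introduced via the Reichardt detector: the signal from one retinal location is delayed and multiplied with the signal from a neighbouring location, and two mirror-symmetric subunits are subtracted to give an opponent, direction-selective output. A minimal sketch follows; the drifting-grating stimulus and the spacing and delay values are illustrative assumptions, not parameters from the article:

```python
# Minimal correlation-type (Reichardt) motion detector sketch.
# Two "photoreceptors" sample a drifting 1-D grating at adjacent
# positions; each subunit correlates one input with a delayed copy
# of its neighbour, and the mirror-symmetric subunits are subtracted
# to yield an opponent, direction-selective response.
import math

def luminance(x, t, speed):
    """Drifting sinusoidal grating; positive speed drifts rightward."""
    return math.sin(2 * math.pi * (x - speed * t))

def reichardt_response(speed, dx=0.1, dt=0.05, n_steps=400, step=0.01):
    """Time-averaged opponent output of a two-point Reichardt detector."""
    left = [luminance(0.0, k * step, speed) for k in range(n_steps)]
    right = [luminance(dx, k * step, speed) for k in range(n_steps)]
    delay = int(round(dt / step))  # temporal delay expressed in samples
    total = 0.0
    for k in range(delay, n_steps):
        # subunit 1: delayed left input times current right input;
        # subunit 2: the mirror pairing; their difference is the opponency
        total += left[k - delay] * right[k] - right[k - delay] * left[k]
    return total / (n_steps - delay)
```

The opponent output is positive for rightward drift, negative for leftward drift, and zero for a static pattern, which is the defining signature of a direction-selective sensor.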
ERIC Educational Resources Information Center
Erdener, Dogu; Burnham, Denis
2018-01-01
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…
Electroencephalograph (EEG) study of brain bistable illusion
NASA Astrophysics Data System (ADS)
Meng, Qinglei; Hong, Elliot; Choa, Fow-Sen
2015-05-01
Bistable illusion reflects two different interpretations of a single image, currently understood as a competition between two antagonistic groups of neurons. Recent research indicates that these two antagonistic groups express different interpretations: while one group is firing, the other group is inhibited, and when this inhibition mechanism weakens, the other group takes over the interpretation. Since attention plays a key role in controlling cognition, it is highly interesting to find the location and frequency band used by the brain (with either top-down or bottom-up control) to reach deterministic visual perceptions. In our study, we used a 16-channel EEG system to record brain signals from subjects during bistable illusion testing. An extra channel of the EEG system was used for temporal marking: at the moment subjects experienced a perception switch, they clicked to mark the time on that channel. The recorded data were presented in the form of brain electrical activity maps (BEAM) in different frequency bands for analysis. It was found that the visual cortex on the right side, between the parietal and occipital areas, controlled the switching of perception. In periods of stable perception, we could consistently observe delta, theta, alpha and beta waves, whereas in periods when perception was switching, almost all theta, alpha, and beta waves were suppressed and delta waves dominated. This result suggests that the delta wave may control the processing of perception switching.
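The band-by-band BEAM analysis described above amounts to estimating power in the conventional delta (1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz) and beta (13-30 Hz) ranges for each channel. A minimal stdlib-only sketch of such a band-power computation follows; the band edges, sampling rate and synthetic signal are conventional assumptions rather than details from this study, and a real pipeline would use dedicated EEG tooling:

```python
# Band-power estimate for one EEG channel via a plain discrete
# Fourier transform (stdlib only). Band edges follow the conventional
# delta/theta/alpha/beta definitions, which are assumptions here.
import cmath
import math

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Sum one-sided DFT power into the conventional EEG bands."""
    n = len(signal)
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):              # skip the DC component
        freq = k * fs / n
        coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[name] += abs(coef) ** 2
    return powers

# Synthetic one-channel example: strong 10 Hz (alpha) plus weak 20 Hz (beta).
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs)
       + 0.3 * math.sin(2 * math.pi * 20 * t / fs) for t in range(2 * fs)]
p = band_powers(sig, fs)
```

On this synthetic channel the alpha band dominates, with a smaller beta contribution and essentially no delta or theta power, mirroring the kind of per-band comparison a BEAM display supports.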
ERIC Educational Resources Information Center
Klein, Sheryl; Guiltner, Val; Sollereder, Patti; Cui, Ying
2011-01-01
Occupational therapists assess fine motor, visual motor, visual perception, and visual skill development, but knowledge of the relationships between scores on sensorimotor performance measures and handwriting legibility and speed is limited. Ninety-nine students in grades three to six with learning and/or behavior problems completed the Upper-Limb…
Touch to see: neuropsychological evidence of a sensory mirror system for touch.
Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo
2012-09-01
The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task comprising visual stimuli that either depicted touch or lacked any tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touch, regardless of the viewing perspective, affects visual perception differently depending on which sensory modality is damaged: in patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.
Muñoz-Ruata, J; Caro-Martínez, E; Martínez Pérez, L; Borja, M
2010-12-01
Perception disorders are frequently observed in persons with intellectual disability (ID) and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the early component of visual event-related potentials, the N1 wave, which is related to perception alterations in several pathologies. Additionally, the relationship between N1 and neuropsychological visual tests was studied with the aim of understanding its functional significance in persons with ID. A group of 69 subjects with etiologically heterogeneous mild ID performed an odd-ball task of active discrimination of geometric figures. N1a (frontal) and N1b (post-occipital) waves were obtained from the evoked potentials. The subjects also performed several neuropsychological tests. Only the N1a component, produced by the target stimulus, showed significant correlations with the visual integration, visual semantic association and visual analogical reasoning tests, the Perceptual Reasoning Index (Wechsler Intelligence Scale for Children, Fourth Edition) and intelligence quotient. The systematic correlations of the target stimulus in perceptual-ability tasks with the N1a (frontal) but not the N1b (posterior) wave suggest that the visual perception process involves frontal participation. These correlations support the idea that the N1a and N1b are not equivalent. The relationship between frontal functions and early stages of visual perception is revised and discussed, as well as the frontal contribution to the neuropsychological tests used. A possible relationship between frontal activity dysfunction in ID and perceptive problems is suggested. The perceptive alterations observed in persons with ID could indeed be due to altered sensory areas, but also to a failure of frontal participation in perceptive processes, conceived as elaborations inside reverberant circuits of perception-action. © 2010 The Authors. 
Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements did have an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
ERIC Educational Resources Information Center
Buldu, Mehmet; Shaban, Mohamed S.
2010-01-01
This study portrayed a picture of kindergarten through 3rd-grade teachers who teach visual arts, their perceptions of the value of visual arts, their visual arts teaching practices, visual arts experiences provided to young learners in school, and major factors and/or influences that affect their teaching of visual arts. The sample for this study…
Close binding of identity and location in visual feature perception
NASA Technical Reports Server (NTRS)
Johnston, J. C.; Pashler, H.
1990-01-01
The binding of identity and location information in disjunctive feature search was studied. Ss searched a heterogeneous display for a color or a form target, and reported both target identity and location. To avoid better than chance guessing of target identity (by choosing the target less likely to have been seen), the difficulty of the two targets was equalized adaptively; a mathematical model was used to quantify residual effects. A spatial layout was used that minimized postperceptual errors in reporting location. Results showed strong binding of identity and location perception. After correction for guessing, no perception of identity without location was found. A weak trend was found for accurate perception of target location without identity. We propose that activated features generate attention-calling "interrupt" signals, specifying only location; attention then retrieves the properties at that location.
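The correction for guessing mentioned above can be illustrated with the standard high-threshold model, in which observed accuracy is a mixture of true perception and chance guessing; the formula below is a common textbook sketch, not necessarily the mathematical model the authors actually fit:

```python
# High-threshold correction for guessing (illustrative sketch).
# If the target is truly perceived with probability p, and on the
# remaining trials the subject guesses correctly with probability g
# (g = 0.5 for two equally likely targets), the observed proportion
# correct is p_obs = p + (1 - p) * g. Inverting gives the corrected
# perception rate.
def corrected_perception_rate(p_obs, g=0.5):
    if p_obs <= g:
        return 0.0  # at or below chance: no evidence of perception
    return (p_obs - g) / (1.0 - g)
```

For example, with two equally likely targets (g = 0.5), an observed 90% correct corresponds to a corrected perception rate of 80%.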
Ferber, Susanne; Emrich, Stephen M
2007-03-01
Segregation and feature binding are essential to the perception and awareness of objects in a visual scene. When a fragmented line-drawing of an object moves relative to a background of randomly oriented lines, the previously hidden object is segregated from the background and consequently enters awareness. Interestingly, in such shape-from-motion displays, the percept of the object persists briefly when the motion stops, suggesting that the segregated and bound representation of the object is maintained in awareness. Here, we tested whether this persistence effect is mediated by capacity-limited working-memory processes, or by the amount of object-related information available. The experiments demonstrate that persistence is affected mainly by the proportion of object information available and is independent of working-memory limits. We suggest that this persistence effect can be seen as evidence for an intermediate, form-based memory store mediating between sensory and working memory.
Handwriting generates variable visual output to facilitate symbol learning.
Li, Julia X; James, Karin H
2016-03-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change though brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Perception of Stand-on-ability: Do Geographical Slants Feel Steeper Than They Look?
Hajnal, Alen; Wagman, Jeffrey B; Doyon, Jonathan K; Clark, Joseph D
2016-07-01
Past research has shown that haptically perceived surface slant by foot is matched with visually perceived slant by a factor of 0.81. Slopes perceived visually appear shallower than when stood on without looking. We sought to identify the sources of this discrepancy by asking participants to judge whether they would be able to stand on an inclined ramp. In the first experiment, visual perception was compared to pedal perception, in which participants took half a step with one foot onto an occluded ramp. Visual perception closely matched the actual maximal slope angle that one could stand on, whereas pedal perception underestimated it. Participants may have been less stable in the pedal condition while taking half a step onto the ramp. We controlled for this by having participants hold onto a sturdy tripod in the pedal condition (Experiment 2). This did not eliminate the difference between visual and haptic perception, but repeating the task while sitting on a chair did (Experiment 3). Beyond balance requirements, pedal perception may also be constrained by the limited range of motion at the ankle and knee joints while standing. Indeed, when we restricted range of motion by wearing an ankle brace, pedal perception underestimated the affordance (Experiment 4). Implications for ecological theory are offered by discussing the notion of functional equivalence and the role of exploration in perception. © The Author(s) 2016.
Visual perception and imagery: a new molecular hypothesis.
Bókkon, I
2009-05-01
Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized, cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather of a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized, mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized, cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. 
This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.
Object perception is selectively slowed by a visually similar working memory load.
Robinson, Alan; Manzi, Alberto; Triesch, Jochen
2008-12-22
The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.
Optical images of visible and invisible percepts in the primary visual cortex of primates
Macknik, Stephen L.; Haglund, Michael M.
1999-01-01
We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363
Temporal resolution for the perception of features and conjunctions.
Bodelón, Clara; Fallah, Mazyar; Reynolds, John H
2007-01-24
The visual system decomposes stimuli into their constituent features, represented by neurons with different feature selectivities. How the signals carried by these feature-selective neurons are integrated into coherent object representations is unknown. To constrain the set of possible integrative mechanisms, we quantified the temporal resolution of perception for color, orientation, and conjunctions of these two features. We find that temporal resolution is measurably higher for each feature than for their conjunction, indicating that time is required to integrate features into a perceptual whole. This finding places temporal limits on the mechanisms that could mediate this form of perceptual integration.
Spering, Miriam; Montagnini, Anna
2011-04-22
Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.
Toward Model Building for Visual Aesthetic Perception
Lughofer, Edwin; Zeng, Xianyi
2017-01-01
Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, quartet, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194
Visual imagery without visual perception: lessons from blind subjects
NASA Astrophysics Data System (ADS)
Bértolo, Helder
2014-08-01
The question of the relationship between visual imagery and visual perception remains an open issue. Many studies have tried to determine whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards whether activation of primary visual areas is needed during imagery. Here we review some of the works providing evidence for both claims. It seems that studying visual imagery in blind subjects can be used as a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group using visual activation in dreams and its relation with the EEG's spectral components, showing that congenitally blind subjects have visual contents in their dreams and are able to draw them; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.
ERIC Educational Resources Information Center
Krahmer, Emiel; Swerts, Marc
2007-01-01
Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…
Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Yasunaga, Masashi; Ogawa, Susumu; Suzuki, Hiroyuki; Imanaka, Kuniyasu
2017-07-01
Older adults tend to overestimate their step-over ability. However, it is unclear whether this is caused by inaccurate self-estimation of physical ability or inaccurate perception of height. We therefore measured both visual height perception ability and self-estimation of step-over ability among young and older adults. Forty-seven older and 16 young adults performed a height perception test (HPT) and a step-over test (SOT). Participants visually judged the height of vertical bars from distances of 7 and 1 m away in the HPT, then self-estimated and subsequently actually performed a step-over action in the SOT. The results showed no significant difference between young and older adults in visual height perception. In the SOT, young adults tended to underestimate their step-over ability, whereas older adults either overestimated their abilities or underestimated them to a lesser extent than did the young adults. Moreover, visual height perception was not correlated with the self-estimation of step-over ability in either young or older adults. These results suggest that the self-overestimation of step-over ability which appeared in some healthy older adults may not be caused by the nature of visual height perception, but by other factor(s), such as the likely age-related nature of self-estimation of physical ability, per se.
Acoustic Tactile Representation of Visual Information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa
Our goal is to explore the use of hearing and touch to convey graphical and pictorial information to visually impaired people. Our focus is on dynamic, interactive display of visual information using existing, widely available devices, such as smart phones and tablets with touch-sensitive screens. We propose a new approach for acoustic-tactile representation of visual signals that can be implemented on a touch screen and allows the user to actively explore a two-dimensional layout consisting of one or more objects with a finger or a stylus while listening to auditory feedback via stereo headphones. The proposed approach is acoustic-tactile because sound is used as the primary source of information for object localization and identification, while touch is used for pointing and kinesthetic feedback. A static overlay of raised-dot tactile patterns can also be added. A key distinguishing feature of the proposed approach is the use of spatial sound (directional and distance cues) to facilitate the active exploration of the layout. We consider a variety of configurations for acoustic-tactile rendering of object size, shape, identity, and location, as well as for the overall perception of simple layouts and scenes. While our primary goal is to explore the fundamental capabilities and limitations of representing visual information in acoustic-tactile form, we also consider a number of relatively simple configurations that can be tied to specific applications. In particular, we consider a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a "virtual cane." We also present a configuration that can convey a "Venn diagram." We present systematic subjective experiments to evaluate the effectiveness of the proposed display for shape perception, object identification and localization, and 2-D layout perception, as well as the applications.
Our experiments were conducted with visually blocked subjects. The results are evaluated in terms of accuracy and speed, and they demonstrate the advantages of spatial sound for guiding the scanning finger or pointer in shape perception, object localization, and layout exploration. We show that these advantages increase with the amount of detail (smaller object size) in the display. Our experimental results show that the proposed system outperforms the state of the art in shape perception, including variable-friction displays. We also demonstrate that, even though they are currently available only as static overlays, raised-dot patterns provide the best shape rendition in terms of both accuracy and speed. Our experiments with layout rendering and perception demonstrate that simultaneous representation of objects, using the most effective approaches for directionality and distance rendering, approaches the optimal performance level provided by visual layout perception. Finally, experiments with the virtual cane and Venn diagram configurations demonstrate that the proposed techniques can be used effectively in simple but nontrivial real-world applications. One of the most important conclusions of our experiments is that there is a clear performance gap between experienced and inexperienced subjects, which indicates that there is a lot of room for improvement with appropriate and extensive training. By exploring a wide variety of design alternatives and focusing on different aspects of the acoustic-tactile interfaces, our results offer many valuable insights and great promise for the design of future systematic tests with visually impaired and visually blocked subjects, utilizing the most effective configurations.
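The directional and distance cues described above can be approximated, for illustration, by a constant-power stereo panning law combined with inverse-distance attenuation. This is a minimal sketch under stated assumptions; the `spatial_gains` helper, its parameters, and the reference distance are hypothetical, not the system's actual rendering code:

```python
import math

def spatial_gains(azimuth_deg, distance, ref=0.3):
    """Illustrative stereo gains for one virtual object.

    Direction is rendered with constant-power panning; range is
    rendered with inverse-distance attenuation, clamped at a
    near-field reference distance `ref` (values hypothetical).
    azimuth_deg runs from -90 (hard left) to +90 (hard right).
    """
    pan = (azimuth_deg + 90.0) / 180.0     # map azimuth to 0..1
    att = ref / max(distance, ref)         # 1.0 at or inside ref
    left = att * math.cos(pan * math.pi / 2.0)
    right = att * math.sin(pan * math.pi / 2.0)
    return left, right

# An object straight ahead at the reference distance is equally loud
# in both ears; doubling its distance halves both channel gains.
l0, r0 = spatial_gains(0.0, 0.3)
l1, r1 = spatial_gains(0.0, 0.6)
```

Constant-power panning keeps the summed energy of the two channels constant across azimuths, one common way to prevent overall loudness from confounding the direction cue.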
ERIC Educational Resources Information Center
Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.
2012-01-01
An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…
ERIC Educational Resources Information Center
Murr, Christopher D.; Blanchard, R. Denise
2011-01-01
Advances in classroom technology have lowered barriers for the visually impaired to study geography, yet few participate. Employing stereotype threat theory, we examined whether beliefs held by the visually impaired affect perceptions toward completing courses and majors in visually oriented disciplines. A test group received a low-level threat…
ERIC Educational Resources Information Center
Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora
2011-01-01
This study surveyed teachers of students with visual impairments in Texas on their perceptions of a set of assistive technology competencies developed for teachers of students with visual impairments by Smith and colleagues (2009). Differences in opinion between practicing teachers of students with visual impairments and Smith's group of…
ERIC Educational Resources Information Center
Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.
2013-01-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…
The role of vision in auditory distance perception.
Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro
2012-01-01
In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing the variability of response. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but there are few studies of the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance from the source is affected by the presence of visual information and that subjects can store in their memory a representation of the environment that later improves the perception of distance.
3D Visualizations of Abstract DataSets
2010-08-01
Subject terms: 3D displays, 2.5D displays, abstract network visualizations, depth perception, human… Surviving abstract fragments describe contrasts among no shadows, drop shadows, and drop lines; altitude perception in airspace management and airspace route planning; and simulated-reality visualizations that employ altitude and heading cues, asking whether cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract networks.
Perception and Television: Physiological Factors of Television Viewing.
ERIC Educational Resources Information Center
GUBA, EGON; AND OTHERS
An experimental system was developed for recording eye-movement data. Raw data were in the form of motion pictures taken of the monitor of a closed-loop television system. A television camera was mounted on the subjects' field of view. The eye marker appeared as a small spot of light and indicated the point in the visual field at which the subject…
Experimenting with Automatic Text-to-Diagram Conversion: A Novel Teaching Aid for the Blind People
ERIC Educational Resources Information Center
Mukherjee, Anirban; Garain, Utpal; Biswas, Arindam
2014-01-01
Diagram-describing texts are an integral part of science and engineering subjects, including geometry, physics, engineering drawing, etc. In order to understand such a text, one at first tries to draw or perceive the underlying diagram. For perception by blind students, such diagrams need to be drawn in some non-visual accessible form like tactile…
ERIC Educational Resources Information Center
Braden, Roberts A., Ed.; And Others
These proceedings contain 37 papers from 51 authors noted for their expertise in the field of visual literacy. The collection is divided into three sections: (1) "Examining Visual Literacy" (including, in addition to a 7-year International Visual Literacy Association bibliography covering the period from 1983-1989, papers on the perception of…
ERIC Educational Resources Information Center
Coelho, Chase J.; Nusbaum, Howard C.; Rosenbaum, David A.; Fenn, Kimberly M.
2012-01-01
Early research on visual imagery led investigators to suggest that mental visual images are just weak versions of visual percepts. Later research helped investigators understand that mental visual images differ in deeper and more subtle ways from visual percepts. Research on motor imagery has yet to reach this mature state, however. Many authors…
Material properties from contours: New insights on object perception.
Pinna, Baingio; Deiana, Katia
2015-10-01
In this work we explored phenomenologically the visual complexity of material attributes on the basis of the contours that define the boundaries of a visual object. The starting point is the rich and pioneering work done by Gestalt psychologists and, in more detail, by Rubin, who first demonstrated that contours contain most of the information related to object perception, such as shape, color, and depth. In fact, by investigating simple conditions like those used by Gestalt psychologists, mostly consisting of contours only, we demonstrated that the phenomenal complexity of material attributes emerges through appropriate manipulation of the contours. A phenomenological approach, analogous to the one used by Gestalt psychologists, was used to answer the following questions. What are contours? Which attributes can be phenomenally defined by contours? Are material properties determined only by contours? What is the visual syntactic organization of object attributes? The results of this work support the idea of a visual syntactic organization as a new kind of object-formation process useful for understanding the language of vision that creates well-formed attribute organizations. The syntax of visual attributes can be considered a new way to investigate modular coding and, more generally, the binding among attributes, i.e., the issue of how the brain represents the pairing of shape and material properties. Copyright © 2015. Published by Elsevier Ltd.
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo
2015-05-01
The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although a brain network is presumably engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with irrelevant noise in the counterpart modality. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, using single-linkage distance, to analyze the connected components of the functional network during speech perception with functional magnetic resonance imaging. For speech perception, bimodal (audiovisual speech cue) or unimodal speech cues with irrelevant noise in the counterpart modality (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter couplings of the left anterior temporal gyrus-anterior insula component and the right premotor-visual component were observed than in the auditory or visual speech cue conditions, respectively. Interestingly, visual speech is perceived under white noise via tight negative coupling among the left inferior frontal region-right anterior cingulate component, the left anterior insula, and bilateral visual regions, including the right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, reflecting efficient or effortful processing during natural audiovisual integration or lip-reading, respectively, in speech perception.
Disentangling visual imagery and perception of real-world objects
Lee, Sue-Hyun; Kravitz, Dwight J.; Baker, Chris I.
2011-01-01
During mental imagery, visual representations can be evoked in the absence of “bottom-up” sensory input. Prior studies have reported similar neural substrates for imagery and perception, but studies of brain-damaged patients have revealed a double dissociation with some patients showing preserved imagery in spite of impaired perception and others vice versa. Here, we used fMRI and multi-voxel pattern analysis to investigate the specificity, distribution, and similarity of information for individual seen and imagined objects to try and resolve this apparent contradiction. In an event-related design, participants either viewed or imagined individual named object images on which they had been trained prior to the scan. We found that the identity of both seen and imagined objects could be decoded from the pattern of activity throughout the ventral visual processing stream. Further, there was enough correspondence between imagery and perception to allow discrimination of individual imagined objects based on the response during perception. However, the distribution of object information across visual areas was strikingly different during imagery and perception. While there was an obvious posterior-anterior gradient along the ventral visual stream for seen objects, there was an opposite gradient for imagined objects. Moreover, the structure of representations (i.e. the pattern of similarity between responses to all objects) was more similar during imagery than perception in all regions along the visual stream. These results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies. PMID:22040738
The impact of recreational MDMA 'ecstasy' use on global form processing.
White, Claire; Edwards, Mark; Brown, John; Bell, Jason
2014-11-01
The ability to integrate local orientation information into a global form percept was investigated in long-term ecstasy users. Evidence suggests that ecstasy disrupts the serotonin system, with the visual areas of the brain being particularly susceptible. Previous research has found altered orientation processing in the primary visual area (V1) of users, thought to be due to disrupted serotonin-mediated lateral inhibition. The current study aimed to investigate whether orientation deficits extend to higher visual areas involved in global form processing. Forty-five participants completed a psychophysical (Glass pattern) study allowing an investigation into the mechanisms underlying global form processing and sensitivity to changes in the offset of the stimuli (jitter). A subgroup of polydrug-ecstasy users (n=6) with high ecstasy use had significantly higher thresholds for the detection of Glass patterns than controls (n=21, p=0.039) after Bonferroni correction. There was also a significant interaction between jitter level and drug-group, with polydrug-ecstasy users showing reduced sensitivity to alterations in jitter level (p=0.003). These results extend previous research, suggesting disrupted global form processing and reduced sensitivity to orientation jitter with ecstasy use. Further research is needed to investigate this finding in a larger sample of heavy ecstasy users and to differentiate the effects of other drugs. © The Author(s) 2014.
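A Glass pattern of the kind used in such detection tasks pairs randomly placed anchor dots with partners displaced along a (jittered) signal orientation, here concentric. This is a minimal sketch with invented parameter values, not the study's stimulus code:

```python
import numpy as np

def glass_pattern(n_dipoles=100, dipole_len=0.02, jitter_deg=0.0, seed=0):
    """Return a (2 * n_dipoles, 2) array of dot coordinates for a
    concentric Glass pattern inside a unit-radius annulus;
    jitter_deg is the SD of Gaussian orientation jitter per dipole."""
    rng = np.random.default_rng(seed)
    r = np.sqrt(rng.uniform(0.05, 1.0, n_dipoles))  # uniform over area
    theta = rng.uniform(0.0, 2.0 * np.pi, n_dipoles)
    x, y = r * np.cos(theta), r * np.sin(theta)
    # Concentric signal: dipole orientation is tangential (radial
    # angle + 90 deg), perturbed by per-dipole orientation jitter.
    ori = theta + np.pi / 2.0 + np.deg2rad(
        rng.normal(0.0, jitter_deg, n_dipoles))
    return np.column_stack([
        np.concatenate([x, x + dipole_len * np.cos(ori)]),
        np.concatenate([y, y + dipole_len * np.sin(ori)]),
    ])

dots = glass_pattern(n_dipoles=100, jitter_deg=10.0)
```

Raising `jitter_deg` degrades the global concentric structure while leaving local dot statistics unchanged, which appears to correspond to the jitter manipulation whose sensitivity the study measured.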
How does parents' visual perception of their child's weight status affect their feeding style?
Yilmaz, Resul; Erkorkmaz, Ünal; Ozcetin, Mustafa; Karaaslan, Erhan
2013-01-01
Eating style is one of the prominent factors that determine energy intake. One of the influencing factors that determine parental feeding style is parental perception of the weight status of the child. The aim of this study is to evaluate the relationship between maternal visual perception of their children's weight status and their feeding style. A cross-sectional survey was completed with the mothers of 380 preschool children aged 5 to 7 years (mean 6.14 years). Visual perception scores were measured with a sketch, and maternal feeding style was measured with the validated "Parental Feeding Style Questionnaire". The parental feeding dimensions "emotional feeding" and "encouragement to eat" subscale scores were low in overweight children according to visual perception classification. "Emotional feeding" and "permissive control" subscale scores were statistically different between children classified as correctly perceived and those incorrectly perceived as lower weight due to maternal misperception. Various feeding styles were related to maternal visual perception. The best approach to preventing obesity and underweight may be to focus on achieving correct parental perception of the weight status of their children, thus improving parental skills and leading them to implement proper feeding styles. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
Development of form similarity as a Gestalt grouping principle in infancy.
Quinn, Paul C; Bhatt, Ramesh S; Brush, Diana; Grimes, Autumn; Sharpnack, Heather
2002-07-01
Given evidence demonstrating that infants 3 months of age and younger can utilize the Gestalt principle of lightness similarity to group visually presented elements into organized percepts, four experiments using the familiarization/novelty-preference procedure were conducted to determine whether infants can also organize visual pattern information in accord with the Gestalt principle of form similarity. In Experiments 1 and 2, 6- to 7-month-olds, but not 3- to 4-month-olds, presented with generalization and discrimination tasks involving arrays of X and O elements responded as if they organized the elements into columns or rows based on form similarity. Experiments 3 and 4 demonstrated that the failure of the young infants to use form similarity was not due to insufficient processing time or the inability to discriminate between the individual X and O elements. The results suggest that different Gestalt principles may become functional over different time courses of development, and that not all principles are automatically deployed in the manner originally proposed by Gestalt theorists.
Combining Multiple Forms Of Visual Information To Specify Contact Relations In Spatial Layout
NASA Astrophysics Data System (ADS)
Sedgwick, Hal A.
1990-03-01
An expert system, called Layout2, has been described, which models a subset of available visual information for spatial layout. The system is used to examine detailed interactions between multiple, partially redundant forms of information in an environment-centered geometrical model of an environment obeying certain rather general constraints. This paper discusses the extension of Layout2 to include generalized contact relations between surfaces. In an environment-centered model, the representation of viewer-centered distance is replaced by the representation of environmental location. This location information is propagated through the representation of the environment by a network of contact relations between contiguous surfaces. Perspective information interacts with other forms of information to specify these contact relations. The experimental study of human perception of contact relations in extended spatial layouts is also discussed. Differences between human results and Layout2 results reveal limitations in the human ability to register available information; they also point to the existence of certain forms of information not yet formalized in Layout2.
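The propagation of environmental location through a network of contact relations can be sketched as a breadth-first traversal of a contact graph. The surface names, offsets, and function below are hypothetical illustrations, not Layout2's actual representation:

```python
from collections import deque

# Hypothetical contact network: each edge records that one surface
# touches another at a fixed offset in an environment-centred frame.
CONTACTS = {
    "ground": [("table", (2.0, 0.0)), ("chair", (3.5, 0.0))],
    "table": [("book", (0.1, 0.8))],
}

def propagate_locations(anchor="ground", origin=(0.0, 0.0)):
    """Assign environment-centred locations by chaining contact
    relations outward from a surface whose location is known."""
    locations = {anchor: origin}
    queue = deque([anchor])
    while queue:
        surface = queue.popleft()
        x, y = locations[surface]
        for other, (dx, dy) in CONTACTS.get(surface, []):
            if other not in locations:   # first contact path wins
                locations[other] = (x + dx, y + dy)
                queue.append(other)
    return locations

locations = propagate_locations()
```

The book's location is recovered only indirectly, through its contact with the table; this chaining of contact relations is what replaces viewer-centred distance in an environment-centred model.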
The Comparison of Visual Working Memory Representations with Perceptual Inputs
ERIC Educational Resources Information Center
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.
2009-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…
Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.
Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A
2007-06-01
This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices with a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and laparoscopic skills were assessed objectively while the novices performed 1-h practice sessions on the LapSim(R), comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores, respectively. The degree of visual-spatial perception correlated significantly with laparoscopic performance scores on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those (Group B) who had a low degree of spatial perception (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators to develop adequate training programs that can be individually adapted.
Behrens, Janina R.; Kraft, Antje; Irlbacher, Kerstin; Gerhardt, Holger; Olma, Manuel C.; Brandt, Stephan A.
2017-01-01
Understanding processes performed by an intact visual cortex as the basis for developing methods that enhance or restore visual perception is of great interest to both researchers and medical practitioners. Here, we explore whether contrast sensitivity, a main function of the primary visual cortex (V1), can be improved in healthy subjects by repetitive, noninvasive anodal transcranial direct current stimulation (tDCS). Contrast perception was measured via threshold perimetry directly before and after intervention (tDCS or sham stimulation) on each day over 5 consecutive days (24 subjects, double-blind study). tDCS improved contrast sensitivity from the second day onwards, with significant effects lasting 24 h. After the last stimulation on day 5, the anodal group showed a significantly greater improvement in contrast perception than the sham group (23 vs. 5%). We found significant long-term effects in only the central 2–4° of the visual field 4 weeks after the last stimulation. We suspect a combination of two factors contributes to these lasting effects. First, the V1 area that represents the central retina was located closer to the polarization electrode, resulting in higher current density. Second, the central visual field is represented by a larger cortical area relative to the peripheral visual field (cortical magnification). This is the first study showing that tDCS over V1 enhances contrast perception in healthy subjects for several weeks. This study contributes to the investigation of the causal relationship between the external modulation of neuronal membrane potential and behavior (in our case, visual perception). Because the vast majority of human studies only show temporary effects after single tDCS sessions targeting the visual system, our study underpins the potential for lasting effects of repetitive tDCS-induced modulation of neuronal excitability. PMID:28860969
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Orientation of selective effects of body tilt on visually induced perception of self-motion.
Nakamura, S; Shimojo, S
1998-10-01
We examined the effect of body posture on visually induced perception of self-motion (vection) at various angles of observer tilt. The experiment indicated that tilting the observer's body enhanced the perceived strength of vertical vection, whereas body tilt had no effect on horizontal vection. This result suggests an interaction between the effects of visual and vestibular information on the perception of self-motion.
Visual enhancing of tactile perception in the posterior parietal cortex.
Ro, Tony; Wallace, Ruth; Hagedorn, Judith; Farnè, Alessandro; Pienkos, Elizabeth
2004-01-01
The visual modality typically dominates over our other senses. Here we show that after inducing an extreme conflict in the left hand between vision of touch (present) and the feeling of touch (absent), sensitivity to touch increases for several minutes after the conflict. Transcranial magnetic stimulation of the posterior parietal cortex after this conflict not only eliminated the enduring visual enhancement of touch, but also impaired normal tactile perception. This latter finding demonstrates a direct role of the parietal lobe in modulating tactile perception as a result of the conflict between these senses. These results provide evidence for visual-to-tactile perceptual modulation and demonstrate effects of illusory vision of touch on touch perception through a long-lasting modulatory process in the posterior parietal cortex.
NASA Astrophysics Data System (ADS)
Ramirez, Joshua; Mann, Virginia
2005-08-01
Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.
Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception
Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.
2016-01-01
Purpose The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment. Results In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379
Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin
2017-01-01
Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Mental Imagery: Functional Mechanisms and Clinical Applications
Pearson, Joel; Naselaris, Thomas; Holmes, Emily A.; Kosslyn, Stephen M.
2015-01-01
Mental imagery research has weathered both disbelief of the phenomenon and inherent methodological limitations. Here we review recent behavioral, brain imaging, and clinical research that has reshaped our understanding of mental imagery. Research supports the claim that visual mental imagery is a depictive internal representation that functions like a weak form of perception. Brain imaging work has demonstrated that neural representations of mental and perceptual images resemble one another as early as the primary visual cortex (V1). Activity patterns in V1 encode mental images and perceptual images via a common set of low-level depictive visual features. Recent translational and clinical research reveals the pivotal role that imagery plays in many mental disorders and suggests how clinicians can utilize imagery in treatment. PMID:26412097
Neuro-ophthalmic manifestations of cerebrovascular accidents.
Ghannam, Alaa S Bou; Subramanian, Prem S
2017-11-01
Ocular functions can be affected in almost any type of cerebrovascular accident (CVA), creating a burden on the patient and family and limiting functionality. The present review summarizes the different ocular outcomes after stroke, divided into three categories: vision, ocular motility, and visual perception. We also discuss interventions that have been proposed to help restore vision and perception after CVA. Interventions that might help expand or compensate for visual field loss and visuospatial neglect include explorative saccade training, prisms, visual restoration therapy (VRT), and transcranial direct current stimulation (tDCS). VRT makes use of neuroplasticity, which has shown efficacy in animal models but remains controversial in human studies. CVAs can lead to decreased visual acuity, visual field loss, ocular motility abnormalities, and visuospatial perception deficits. Although ocular motility problems can be corrected with surgery, vision and perception deficits are more difficult to overcome. Interventions to restore or compensate for visual field deficits are controversial despite theoretical underpinnings, animal model evidence, and case reports of their efficacy.
A comparison of haptic material perception in blind and sighted individuals.
Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R
2015-10-01
We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization.
Keil, Andreas; Sabatinelli, Dean; Ding, Mingzhou; Lang, Peter J.; Ihssen, Niklas; Heim, Sabine
2013-01-01
Re-entrant modulation of visual cortex has been suggested as a critical process for enhancing perception of emotionally arousing visual stimuli. This study explores how the time information inherent in large-scale electrocortical measures can be used to examine the functional relationships among the structures involved in emotional perception. Granger causality analysis was conducted on steady-state visual evoked potentials elicited by emotionally arousing pictures flickering at a rate of 10 Hz. This procedure allows one to examine the direction of neural connections. Participants viewed pictures that varied in emotional content, depicting people in neutral contexts, erotica, or interpersonal attack scenes. Results demonstrated increased coupling between visual and cortical areas when viewing emotionally arousing content. Specifically, intraparietal to inferotemporal and precuneus to calcarine connections were stronger for emotionally arousing picture content. Thus, we provide evidence for re-entrant signal flow during emotional perception, which originates from higher tiers and enters lower tiers of visual cortex. PMID:18095279
[Perception of physiological visual illusions by individuals with schizophrenia].
Ciszewski, Słowomir; Wichowicz, Hubert Michał; Żuk, Krzysztof
2015-01-01
Visual perception by individuals with schizophrenia has not been extensively researched. The focus of this review is the perception of physiological visual illusions by patients with schizophrenia, with differences in perception reported in a small number of studies. The increased or decreased susceptibility of these patients to various illusions seems to be unconnected to the illusions' location of origin in the visual apparatus, as is also the case for illusions connected to other modalities. The susceptibility of patients with schizophrenia to haptic illusions has not yet been investigated, although the need for such investigation is clear. The emerging picture is that some individuals with schizophrenia are "resistant" to some of the illusions and are able to assess visual phenomena more "rationally", yet certain illusions (e.g., the Müller-Lyer illusion) are perceived more intensely. Disturbances in the perception of visual illusions have neither been classified as possible diagnostic indicators of a dangerous mental condition, nor included in the endophenotype of schizophrenia. Although the relevant data are sparse, the ability to replicate the results is limited, and the research model lacks a "gold standard", some preliminary conclusions may be drawn. There are indications that disturbances in visual perception are connected to the extent of disorganization, poor initial social functioning, poor prognosis, and the types of schizophrenia described as neurodevelopmental. Patients with schizophrenia usually fail to perceive those illusions that require volitional controlled attention, and show a lack of sensitivity to the contrast between shape and background.
Haase, Steven J; Fisk, Gary D
2011-08-01
A key problem in unconscious perception research is ruling out the possibility that weak conscious awareness of stimuli might explain the results. In the present study, signal detection theory was compared with the objective threshold/strategic model as explanations of results for detection and identification sensitivity in a commonly used unconscious perception task. In the task, 64 undergraduate participants detected and identified one of four briefly displayed, visually masked letters. Identification was significantly above baseline (i.e., proportion correct > .25) at the highest detection confidence rating. This result is most consistent with signal detection theory's continuum of sensory states and serves as a possible index of conscious perception. However, there was limited support for the other model in the form of a predicted "looker's inhibition" effect, which produced identification performance that was significantly below baseline. One additional result, an interaction between the target stimulus and type of mask, raised concerns for the generality of unconscious perception effects.
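Signal detection theory, invoked above, separates detection sensitivity from response bias. A minimal sketch of the standard d' computation (the trial counts below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction so rates never reach exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts from a masked-letter detection task
print(round(d_prime(hits=70, misses=30, false_alarms=20, correct_rejections=80), 2))
```

On SDT's single-continuum view, above-baseline identification at high detection confidence reflects graded sensory evidence rather than a discrete conscious/unconscious divide.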
A model of color vision with a robot system
NASA Astrophysics Data System (ADS)
Wang, Haihui
2006-01-01
In this paper, we propose to generalize the saccade target method and state that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent visual stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that has to be compensated for in order to perceive a stable world. In this paper, we provide a simple implementation of this sensory-motor contingency view of perceptual stability. We show how a straightforward application of the temporal-difference reinforcement learning technique yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
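The temporal-difference learning mentioned above can be illustrated with the textbook TD(0) value update; the two-state gaze example and all numbers below are invented for illustration and are not taken from the paper:

```python
def td0_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state)."""
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * td_error
    return values

# Hypothetical: learn the expected sensor response after a saccade
# between two gaze states, from repeated (state, next_state, reward) samples.
V = {"pre_saccade": 0.0, "post_saccade": 0.0}
for _ in range(100):
    V = td0_update(V, "post_saccade", "post_saccade", reward=1.0)
    V = td0_update(V, "pre_saccade", "post_saccade", reward=0.0)
print(round(V["pre_saccade"], 2), round(V["post_saccade"], 2))
```

The learned values encode how sensor input is expected to change with the motor act, which is the sensorimotor-contingency idea the abstract describes.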
Lee, D H; Mehta, M D
2003-06-01
Effective risk communication in transfusion medicine is important for health-care consumers, but understanding the numerical magnitude of risks can be difficult. The objective of this study was to determine the effect of a visual risk communication tool on the knowledge and perception of transfusion risk. Laypeople were randomly assigned to receive transfusion risk information with either a written or a visual presentation format for communicating and comparing the probabilities of transfusion risks relative to other hazards. Knowledge of transfusion risk was ascertained with a multiple-choice quiz and risk perception was ascertained by psychometric scaling and principal components analysis. Two hundred subjects were recruited and randomly assigned. Risk communication with both written and visual presentation formats increased knowledge of transfusion risk and decreased the perceived dread and severity of transfusion risk. Neither format changed the perceived knowledge and control of transfusion risk, nor the perceived benefit of transfusion. No differences in knowledge or risk perception outcomes were detected between the groups randomly assigned to written or visual presentation formats. Risk communication that incorporates risk comparisons in either written or visual presentation formats can improve knowledge and reduce the perception of transfusion risk in laypeople.
Applied estimation for hybrid dynamical systems using perceptional information
NASA Astrophysics Data System (ADS)
Plotnik, Aaron M.
This dissertation uses the motivating example of robotic tracking of mobile deep ocean animals to present innovations in robotic perception and estimation for hybrid dynamical systems. An approach to estimation for hybrid systems is presented that utilizes uncertain perceptional information about the system's mode to improve tracking of its mode and continuous states. This results in significant improvements in situations where previously reported methods of estimation for hybrid systems perform poorly due to poor distinguishability of the modes. The specific application that motivates this research is an automatic underwater robotic observation system that follows and films individual deep ocean animals. A first version of such a system has been developed jointly by the Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute (MBARI). This robotic observation system is successfully fielded on MBARI's ROVs, but agile specimens often evade the system. When a human ROV pilot performs this task, one advantage that he has over the robotic observation system in these situations is the ability to use visual perceptional information about the target, immediately recognizing any changes in the specimen's behavior mode. With the approach of the human pilot in mind, a new version of the robotic observation system is proposed which is extended to (a) derive perceptional information (visual cues) about the behavior mode of the tracked specimen, and (b) merge this dissimilar, discrete and uncertain information with more traditional continuous noisy sensor data by extending existing algorithms for hybrid estimation. These performance enhancements are enabled by integrating techniques in hybrid estimation, computer vision and machine learning. First, real-time computer vision and classification algorithms extract a visual observation of the target's behavior mode. 
Existing hybrid estimation algorithms are extended to admit this uncertain but discrete observation, complementing the information available from more traditional sensors. State tracking is achieved using a new form of Rao-Blackwellized particle filter called the mode-observed Gaussian Particle Filter. Performance is demonstrated using data from simulation and data collected on actual specimens in the ocean. The framework for estimation using both traditional and perceptional information is easily extensible to other stochastic hybrid systems with mode-related perceptional observations available.
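The core idea above, folding an uncertain discrete mode observation into a particle filter's weight update alongside a continuous measurement, can be sketched as follows. The two-mode confusion matrix, the noise level, and all data are assumptions for illustration, not the dissertation's actual model:

```python
import math
import random

# Hypothetical two-mode target: it either "drifts" or "swims".
MODES = ("drift", "swim")
# Assumed confusion model for the vision-based mode classifier:
# P(classifier reports column-mode | particle's row-mode)
P_OBS = {"drift": {"drift": 0.8, "swim": 0.2},
         "swim":  {"drift": 0.3, "swim": 0.7}}

def update_weights(particles, z_pos, z_mode, sigma=0.5):
    """Weight each (position, mode) particle by the continuous position
    likelihood times the discrete mode-observation likelihood."""
    weights = []
    for pos, mode in particles:
        w_cont = math.exp(-0.5 * ((z_pos - pos) / sigma) ** 2)
        weights.append(w_cont * P_OBS[mode][z_mode])
    total = sum(weights)
    return [w / total for w in weights]

random.seed(0)
particles = [(random.gauss(0.0, 1.0), random.choice(MODES)) for _ in range(500)]
w = update_weights(particles, z_pos=0.0, z_mode="swim")
# Posterior probability that the target is in the "swim" mode:
p_swim = sum(wi for (_, m), wi in zip(particles, w) if m == "swim")
print(round(p_swim, 2))
```

Even a noisy classifier output (here, 70%/80% reliable) shifts posterior mass toward the observed mode, which is what makes poorly distinguishable modes trackable.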
Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung
2017-01-01
Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation mechanisms in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007
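The frontal alpha power asymmetry index used above as a valence proxy is typically computed as a log-power difference between homologous frontal electrodes. A minimal sketch with hypothetical F3/F4 power values (the exact electrodes and band limits used in the study are not specified here):

```python
import math

def frontal_alpha_asymmetry(alpha_right, alpha_left):
    """FAA = ln(right alpha power) - ln(left alpha power). Under the
    common interpretation, higher values (relatively less left-frontal
    alpha, i.e. greater left activation) index more positive valence."""
    return math.log(alpha_right) - math.log(alpha_left)

# Hypothetical alpha-band power (uV^2) at F4 (right) and F3 (left)
print(round(frontal_alpha_asymmetry(4.2, 3.1), 3))
```

Because alpha power is inversely related to cortical activation, the sign convention matters; studies should report which hemisphere is the minuend.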
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
How do visual and postural cues combine for self-tilt perception during slow pitch rotations?
Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L
2014-11-01
Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt.
A unified account of tilt illusions, association fields, and contour detection based on elastica.
Keemink, Sander W; van Rossum, Mark C W
2016-09-01
As expressed in the Gestalt law of good continuation, human perception tends to associate stimuli that form smooth continuations. Contextual modulation in primary visual cortex, in the form of association fields, is believed to play an important role in this process. Yet a unified and principled account of the good continuation law on the neural level is lacking. In this study we introduce a population model of primary visual cortex. Its contextual interactions depend on the elastica curvature energy of the smoothest contour connecting oriented bars. As expected, this model leads to association fields consistent with data. However, in addition the model displays tilt-illusions for stimulus configurations with grating and single bars that closely match psychophysics. Furthermore, the model explains not only pop-out of contours amid a variety of backgrounds, but also pop-out of single targets amid a uniform background. We thus propose that elastica is a unifying principle of the visual cortical network.
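The elastica energy underlying this model is the integral of squared curvature along a contour, E = ∫ κ² ds: straight continuations cost nothing, kinks are expensive. A minimal discrete sketch over a polyline (the discretization via turning angles is an assumption for illustration, not the authors' implementation):

```python
import math

def elastica_energy(points):
    """Approximate E = integral of curvature^2 ds over a polyline by
    treating the turning angle at each interior vertex as kappa * ds."""
    energy = 0.0
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # Turning angle, wrapped into (-pi, pi]
        dtheta = math.atan2(math.sin(a2 - a1), math.cos(a2 - a1))
        ds = 0.5 * (math.hypot(x1 - x0, y1 - y0) + math.hypot(x2 - x1, y2 - y1))
        energy += dtheta ** 2 / ds           # (dtheta/ds)^2 * ds
    return energy

straight = [(i, 0.0) for i in range(5)]
bent = [(0, 0), (1, 0), (2, 1), (3, 0), (4, 0)]  # a kinked contour
print(elastica_energy(straight) < elastica_energy(bent))  # prints True
```

A contour-association model built on this energy favors smooth continuations, which is exactly the good-continuation behavior the abstract describes.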
Aesthetic Response and Cosmic Aesthetic Distance
NASA Astrophysics Data System (ADS)
Madacsi, D.
2013-04-01
For Homo sapiens, the experience of a primal aesthetic response to nature was perhaps a necessary precursor to the arousal of an artistic impulse. Among the likely visual candidates for primal initiators of aesthetic response, arguments can be made in favor of the flower, the human face and form, and the sky and light itself as primordial aesthetic stimulants. Although visual perception of the sensory world of flowers and human faces and forms is mediated by light, it was most certainly in the sky that humans first could respond to the beauty of light per se. It is clear that as a species we do not yet identify and comprehend as nature, or part of nature, the entire universe beyond our terrestrial environs, the universe from which we remain inexorably separated by space and time. However, we now enjoy a technologically-enabled opportunity to probe the ultimate limits of visual aesthetic distance and the origins of human aesthetic response as we remotely explore deep space via the Hubble Space Telescope and its successors.
Wu, Huey-Min; Lin, Chin-Kai; Yang, Yu-Mao; Kuo, Bor-Chen
2014-11-12
Visual perception is the fundamental skill required for a child to recognize words and to read and write. No visual perception assessment tool based on Chinese characters had been developed for preschool children in Taiwan. The purposes of this study were to develop a computerized visual perception assessment tool for Chinese character structures and to explore its psychometric characteristics. The study adopted purposive sampling and evaluated 551 kindergarten-age children (293 boys, 258 girls) ranging from 46 to 81 months of age. The test instrument consisted of three subtests and 58 items, covering basic strokes, single-component characters, and compound characters. Based on the results of model-fit analysis, higher-order item response theory was used to estimate performance in visual perception, basic strokes, single-component characters, and compound characters simultaneously. Analyses of variance were used to detect significant differences between age groups and gender groups. Item difficulties in the visual perception test ranged from -2 to 1. The visual perception ability of the 4- to 6-year-old children ranged from -1.66 to 2.19. Gender did not have a significant effect on overall performance, but there were significant differences among age groups: 6-year-olds performed better than 5-year-olds, who in turn performed better than 4-year-olds. This study obtained detailed diagnostic scores by using a higher-order item response theory model to characterize visual perception of basic strokes, single-component characters, and compound characters. Further statistical analysis showed that, for basic strokes and compound characters, girls performed better than boys, and there were differences within each age group. For single-component characters, there was no gender difference in performance.
However, the performance of 6-year-olds was again better than that of 4-year-olds, with no statistical difference between the performance of 5-year-olds and 6-year-olds. The basic-stroke, single-component-character, and compound-character subtests showed good reliability and validity; the tool can therefore be applied to diagnose visual perception problems at preschool age. Copyright © 2014 Elsevier Ltd. All rights reserved.
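In item response theory, item difficulty and test-taker ability live on a common logit scale, which is why the reported item difficulties (-2 to 1) and ability estimates (-1.66 to 2.19) are directly comparable. A minimal one-parameter (Rasch) sketch of that relationship (the higher-order model used in the study adds structure not shown here):

```python
import math

def rasch_p(theta, b):
    """Probability that a child with ability theta (logits) answers an
    item of difficulty b (logits) correctly under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Ability equal to difficulty gives a 50% chance of success; the hardest
# reported item (b = 1) is still manageable for the most able children
# (theta = 2.19) and very hard for the least able (theta = -1.66).
p_match = rasch_p(0.0, 0.0)
p_high = rasch_p(2.19, 1.0)
p_low = rasch_p(-1.66, 1.0)
```

Because both parameters share one scale, a test spanning difficulties -2 to 1 discriminates well across the observed ability range.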
ERIC Educational Resources Information Center
Poon, K. W.; Li-Tsang, C. W .P.; Weiss, T. P. L.; Rosenblum, S.
2010-01-01
This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and…
ERIC Educational Resources Information Center
Geldof, C. J. A.; van Wassenaer, A. G.; de Kieviet, J. F.; Kok, J. H.; Oosterlaan, J.
2012-01-01
A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual…
ERIC Educational Resources Information Center
Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis
2008-01-01
An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond…
Willems, Roel M; Clevis, Krien; Hagoort, Peter
2011-09-01
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory: presentation of a visual scene that is neutral in itself intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared both to reading the sentence alone and to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function for emotional information across domains such as vision and language.
Mandarin Visual Speech Information
ERIC Educational Resources Information Center
Chen, Trevor H.
2010-01-01
While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…
3-D vision and figure-ground separation by visual cortex.
Grossberg, S
1994-01-01
A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. 
Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive resonance theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal (IT) cortex for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular motion BCS signals interact with the model Where stream.(ABSTRACT TRUNCATED AT 400 WORDS)
Making the invisible visible: verbal but not visual cues enhance visual detection.
Lupyan, Gary; Spivey, Michael J
2010-07-07
Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision about briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'); a visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect correlated positively with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible, whereas seeing a preview of the target stimulus did not similarly enhance detection. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of a heard word can influence even the most elementary visual processing, and they inform our understanding of how language affects perception.
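The sensitivity measure d' reported here is standard signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate, so a cue that raises hits without raising false alarms raises d'. A minimal sketch using only the standard library (the correction method and the trial counts are illustrative, not the paper's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction (add 0.5 per cell) so rates never reach 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: the verbal cue adds hits without adding false alarms.
uncued = d_prime(hits=30, misses=20, false_alarms=10, correct_rejections=40)
cued = d_prime(hits=40, misses=10, false_alarms=10, correct_rejections=40)
```

Because d' separates sensitivity from response bias, a gain in d' reflects genuinely better detection rather than a more liberal criterion.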
Face perception in women with Turner syndrome and its underlying factors.
Anaki, David; Zadikov Mor, Tal; Gepstein, Vardit; Hochberg, Ze'ev
2016-09-01
Turner syndrome (TS) is a chromosomal condition that affects development in females. It is characterized by short stature, ovarian failure and other congenital malformations, due to partial or complete absence of one X chromosome. Women with TS frequently suffer from various physical and hormonal dysfunctions, along with impairments in visual-spatial processing and difficulties in social cognition. Previous research has also shown difficulties in face and emotion perception. In the current study we examined two questions: first, whether women with TS who are impaired in face perception also suffer from deficits in face-specific processes; and second, whether these face impairments in TS are related to the visual-spatial perceptual dysfunctions exhibited by individuals with TS, or to impaired social cognition skills. Twenty-six women with TS and 26 control participants completed cognitive and psychological tests assessing visual-spatial perception, face and facial expression perception, and social cognition skills. Results show that women with TS were less accurate in face perception and facial expression processing, yet they exhibited normal face-specific processes (configural and holistic processing). They also showed difficulties in spatial perception and social cognition. Additional analyses revealed that their face perception impairments were related to their deficits in visual-spatial processing. Thus, our results do not support the claim that the impairments in face processing observed in TS are related to difficulties in social cognition; rather, our data point to the possibility that face perception difficulties in TS stem from visual-spatial impairments and may not be specific to faces. Copyright © 2016 Elsevier Ltd. All rights reserved.
Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception
Helfrich, Randolph F.; Huang, Melody; Wilson, Guy; Knight, Robert T.
2017-01-01
Conscious visual perception is proposed to arise from the selective synchronization of functionally specialized but widely distributed cortical areas. It has been suggested that different frequency bands index distinct canonical computations. Here, we probed visual perception on a fine-grained temporal scale to study the oscillatory dynamics supporting prefrontal-dependent sensory processing. We tested whether a predictive context that was embedded in a rapid visual stream modulated the perception of a subsequent near-threshold target. The rapid stream was presented either rhythmically at 10 Hz, to entrain parietooccipital alpha oscillations, or arrhythmically. We identified a 2- to 4-Hz delta signature that modulated posterior alpha activity and behavior during predictive trials. Importantly, delta-mediated top-down control diminished the behavioral effects of bottom-up alpha entrainment. Simultaneous source-reconstructed EEG and cross-frequency directionality analyses revealed that this delta activity originated from prefrontal areas and modulated posterior alpha power. Taken together, this study presents converging behavioral and electrophysiological evidence for frontal delta-mediated top-down control of posterior alpha activity, selectively facilitating visual perception. PMID:28808023
Stereo imaging with spaceborne radars
NASA Technical Reports Server (NTRS)
Leberl, F.; Kobrick, M.
1983-01-01
Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space achieved by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of a surface from two sets of monocular image measurements are the topic of stereology.
Human dynamic orientation model applied to motion simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Borah, J. D.
1976-01-01
The Ormsby model of dynamic orientation, in the form of a discrete time computer program was used to predict non-visually induced sensations during an idealized coordinated aircraft turn. To predict simulation fidelity, the Ormsby model was used to assign penalties for incorrect attitude and angular rate perceptions. It was determined that a three rotational degree of freedom simulation should remain faithful to attitude perception even at the expense of incorrect angular rate sensations. Implementing this strategy, a simulation profile for the idealized turn was designed for a Link GAT-1 trainer. A simple optokinetic display was added to improve the fidelity of roll rate sensations.
Logos: the emblem in the marketing wars.
Chan, S D
1994-07-01
The logo is a powerful marketing tool. It is a condensed form of communication for a selected niche in a mass audience. Effective logo design principles include considerations of art design, visual perception, memory retention, and succinct nonverbal communication. Design principles for logos for dental practices are presented. Dental practices may realize significant advantages if logos are properly conceived and executed and are creatively implemented.
Visual-perceptual impairment in children with cerebral palsy: a systematic review.
Ego, Anne; Lidzba, Karen; Brovedani, Paola; Belmonti, Vittorio; Gonzalez-Monge, Sibylle; Boudia, Baya; Ritz, Annie; Cans, Christine
2015-04-01
Visual perception is one of the cognitive functions often impaired in children with cerebral palsy (CP). The aim of this systematic literature review was to assess the frequency of visual-perceptual impairment (VPI) and its relationship with patient characteristics. Eligible studies were relevant papers assessing visual perception with five common standardized assessment instruments in children with CP published from January 1990 to August 2011. Of the 84 studies selected, 15 were retained. In children with CP, the proportion of VPI ranged from 40% to 50% and the mean visual perception quotient from 70 to 90. None of the studies reported a significant influence of CP subtype, IQ level, side of motor impairment, neuro-ophthalmological outcomes, or seizures. The severity of neuroradiological lesions seemed associated with VPI. The influence of prematurity was controversial, but a lower gestational age was more often associated with lower visual motor skills than with decreased visual-perceptual abilities. The impairment of visual perception in children with CP should be considered a core disorder within the CP syndrome. Further research, including a more systematic approach to neuropsychological testing, is needed to explore the specific impact of CP subgroups and of neuroradiological features on visual-perceptual development. © 2015 The Authors. Developmental Medicine & Child Neurology © 2015 Mac Keith Press.
Perception of Visual Speed While Moving
ERIC Educational Resources Information Center
Durgin, Frank H.; Gigone, Krista; Scott, Rebecca
2005-01-01
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…
The Impact of Visual Impairment on Perceived School Climate
ERIC Educational Resources Information Center
Schade, Benjamin; Larwin, Karen H.
2015-01-01
The current investigation examines whether visual impairment has an impact on a student's perception of the school climate. Using a large national sample of high school students, perceptions were examined for students with vision impairment relative to students with no visual impairments. Three factors were examined: self-reported level of…
Impact of Language on Development of Auditory-Visual Speech Perception
ERIC Educational Resources Information Center
Sekiyama, Kaoru; Burnham, Denis
2008-01-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…
Experience, Context, and the Visual Perception of Human Movement
ERIC Educational Resources Information Center
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
2004-01-01
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
Parents' Perceptions of Physical Activity for Their Children with Visual Impairments
ERIC Educational Resources Information Center
Perkins, Kara; Columna, Luis; Lieberman, Lauren; Bailey, JoEllen
2013-01-01
Introduction: Ongoing communication with parents and the acknowledgment of their preferences and expectations are crucial to promote the participation of physical activity by children with visual impairments. Purpose: The study presented here explored parents' perceptions of physical activity for their children with visual impairments and explored…
Attitudes towards and perceptions of visual loss and its causes among Hong Kong Chinese adults.
Lau, Joseph Tak Fai; Lee, Vincent; Fan, Dorothy; Lau, Mason; Michon, John
2004-06-01
As part of a study of visual function among Hong Kong Chinese adults, their attitudes and perceptions related to visual loss were examined, including fear of visual loss, negative functional impacts of visual loss, the relationship between ageing and visual loss, and help-seeking behaviours related to visual loss. Demographic factors associated with these variables were also studied. The study population comprised people aged 40 and above, randomly selected from the Shatin district of Hong Kong. The participants underwent eye examinations that included visual acuity, intraocular pressure measurement, visual fields, slit-lamp biomicroscopy and ophthalmoscopy, and the primary cause of any visual disability was recorded. The participants were also asked about their attitudes and perceptions regarding visual loss using a structured questionnaire. The prevalence of bilateral visual disability was 2.2% among adults aged 40 or above and 6.4% among adults aged 60 or above. Nearly 36% of the participants selected blindness as the most feared disabling medical condition, a substantially higher proportion than for conditions such as dementia, loss of limbs, deafness or aphasia. Inability to take care of oneself (21.0%), inconvenience related to mobility (20.2%) and inability to work (14.8%) were the three most commonly mentioned 'worst impact' effects of visual loss. Fully 68% of the participants believed that loss of vision is related to ageing, and a majority would seek help and advice from family members in case of visual loss. Visual function is perceived to be very important by Hong Kong Chinese adults. The fear of visual loss is widespread and particularly concerns self-care and functional abilities, and visual loss is commonly seen as related to ageing. Attitudes and perceptions in this population may be modified by educational and outreach efforts in order to take advantage of preventive measures.
Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well established that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of V1 neurons underlying motion perception in the human visual system. Motivated by this biological evidence, a VQA method is proposed in this paper that combines a motion perception quality index with a spatial quality index. Specifically, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated by a difference-of-Gaussians filter bank, yielding the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence, yielding the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random forests regression model trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method shows higher consistency with subjective perception and higher generalization capability.
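The spatial index rests on comparing gradient structure between reference and distorted frames. A minimal sketch of a gradient-similarity measure (the central-difference gradient, the stabilizing constant, and mean pooling are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a grayscale frame."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def gradient_similarity(ref, dist, c=0.01):
    """Per-pixel gradient similarity pooled by the mean; c stabilizes the
    ratio where gradients vanish (the form follows SSIM-style indices)."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    sim = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return sim.mean()

# A frame compared with itself scores 1; smoothing degrades the score.
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
blurred = (frame + np.roll(frame, 1, axis=0) + np.roll(frame, 1, axis=1)) / 3
```

A full VQA pipeline would compute such a score per frame and feed the pooled indices, together with the temporal index, into the regressor.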
James, Karin H; Atwood, Thea P
2009-02-01
Functional specialization in the brain is considered a hallmark of efficient processing. It is therefore not surprising that there are brain areas specialized for processing letters. To better understand the causes of functional specialization for letters, we explore the emergence of this pattern of response in the ventral processing stream through a training paradigm. Previously, we hypothesized that the specialized response pattern seen during letter perception may be due in part to our experience in writing letters. The work presented here investigates whether this aspect of letter processing, the integration of sensorimotor systems through writing, leads to functional specialization in the visual system. To test this idea, we investigated whether different types of experience with letter-like stimuli ("pseudoletters") led to functional specialization similar to that which exists for letters. Neural activation patterns were measured using functional magnetic resonance imaging (fMRI) before and after three different types of training sessions, in which participants were trained to recognize pseudoletters by writing, typing, or purely visual practice. Results suggested that only after writing practice did neural activation patterns for pseudoletters resemble the patterns seen for letters. That is, neural activation in the left fusiform and dorsal precentral gyrus was greater when participants viewed pseudoletters than other, similar stimuli, but only after writing experience. Neural activation also increased after typing practice in the right fusiform and left precentral gyrus, suggesting that in some areas, any motor experience may change visual processing. The results of this experiment suggest an intimate interaction between perceptual and motor systems during pseudoletter perception that may extend to everyday letter perception.
Ambiguities and conventions in the perception of visual art.
Mamassian, Pascal
2008-09-01
Visual perception is ambiguous, and the visual arts play with these ambiguities. While perceptual ambiguities are resolved with prior constraints, artistic ambiguities are resolved by conventions. Is there a relationship between priors and conventions? This review surveys recent work related to these ambiguities in composition, spatial scale, illumination and color, three-dimensional layout, shape, and movement. While most conventions seem to have their roots in perceptual constraints, those conventions that differ from priors may help us appreciate how the visual arts differ from everyday perception.
Cortical visual dysfunction in children: a clinical study.
Dutton, G; Ballantyne, J; Boyd, G; Bradnam, M; Day, R; McCulloch, D; Mackie, R; Phillips, S; Saunders, K
1996-01-01
Damage to the cerebral cortex was responsible for impairment of vision in 90 of 130 consecutive children referred to the Vision Assessment Clinic in Glasgow. Cortical blindness was seen in 16 children; only 2 were mobile, but both showed evidence of navigational blindsight. Cortical visual impairment, in which it was possible to estimate visual acuity but generalised severe brain damage precluded estimation of cognitive visual function, was observed in 9 children. Complex disorders of cognitive vision were seen in 20 children. These could be divided into five categories, involving impairment of: (1) recognition, (2) orientation, (3) depth perception, (4) perception of movement and (5) simultaneous perception. These disorders were observed in a variety of combinations. The remaining children showed evidence of reduced visual acuity and/or visual field loss, but without detectable disorders of cognitive visual function. Early recognition of disorders of cognitive vision is required if active training and remediation are to be implemented.
Gestalt Perception and Local-Global Processing in High-Functioning Autism
ERIC Educational Resources Information Center
Bolte, Sven; Holtmann, Martin; Poustka, Fritz; Scheurich, Armin; Schmidt, Lutz
2007-01-01
This study examined gestalt perception in high-functioning autism (HFA) and its relation to tasks indicative of local visual processing. Data on of gestalt perception, visual illusions (VI), hierarchical letters (HL), Block Design (BD) and the Embedded Figures Test (EFT) were collected in adult males with HFA, schizophrenia, depression and…
Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels
ERIC Educational Resources Information Center
Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz
2012-01-01
Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…
Functional Dissociation between Perception and Action Is Evident Early in Life
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Avidan, Galia; Ganel, Tzvi
2012-01-01
The functional distinction between vision for perception and vision for action is well documented in the mature visual system. Ganel and colleagues recently provided direct evidence for this dissociation, showing that while visual processing for perception follows Weber's fundamental law of psychophysics, action violates this law. We tracked the…
A Dynamic Systems Theory Model of Visual Perception Development
ERIC Educational Resources Information Center
Coté, Carol A.
2015-01-01
This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…
Subliminal perception of complex visual stimuli.
Ionescu, Mihai Radu
2016-01-01
Rationale: Unconscious perception in various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study assessed whether unconscious visual perception can occur with more complex visual stimuli than previously used. Methods and Results: Videos containing slideshows of indifferent complex images, with interspersed frames of interest of various durations, were presented to 24 healthy volunteers. Perception of the stimulus was evaluated with a forced-choice questionnaire, while awareness was quantified by self-assessment on a modified awareness scale annexed to each question, with four categories of awareness. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus durations, significantly correct answers were coupled with some degree of conscious awareness. Discussion: At 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended with a focus on stimulus durations between 16.66 and 50 ms.
Motion transparency: making models of motion perception transparent.
Snowden; Verstraten
1999-10-01
In daily life our visual system is bombarded with motion information. We see cars driving by, flocks of birds flying in the sky, clouds passing behind trees that are dancing in the wind. Vision science has a good understanding of the first stage of visual motion processing, that is, the mechanism underlying the detection of local motions. Currently, research is focused on the processes that occur beyond this first stage, at which local motions have to be integrated to form objects, define the boundaries between them, construct surfaces, and so on. An interesting, if complicated, case is motion transparency: the situation in which two overlapping surfaces move transparently over each other, so that two motions must be assigned to the same retinal location. Several researchers have tried to solve this problem from a computational point of view, using physiological and psychophysical results as a guideline. We discuss two models: one uses the traditional idea known as 'filter selection' and the other takes a relatively new approach based on Bayesian inference. Predictions from these models are compared with our own visual behaviour and that of the neural substrates presumed to underlie these perceptions.
Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.
Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J
2017-02-01
Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, there is also great fluctuation of vividness within individuals, so looking only at differences between people necessarily obscures the picture. In this study, we show that variation in moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery resembles the neural response during perception, the more vivid or perception-like the imagery experience is. Copyright © 2017 the authors.
Yang, Yan-Li; Deng, Hong-Xia; Xing, Gui-Yang; Xia, Xiao-Luan; Li, Hai-Fang
2015-02-01
It is not clear whether the methods used in functional brain-network research can be applied to explore the feature-binding mechanism of visual perception. In this study, we investigated the binding of color and shape features in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task, to construct brain networks active during resting and task states. Results showed that brain regions involved in visual information processing were clearly activated during the task. Network components were partitioned using a greedy algorithm, indicating that the visual network also existed during the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the results showed that the occipital and lingual gyri were stable brain regions in the visual system network, that the parietal lobe played a very important role in binding color and shape features, and that the fusiform and inferior temporal gyri were crucial for processing color and shape information. These findings indicate that understanding visual feature binding and the associated cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception.
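A common first step in studies like this is to build the functional network by correlating regional time series and keeping edges above a threshold. The sketch below illustrates that step only; the region names, toy time series, and the 0.5 threshold are illustrative, not the authors' actual pipeline.

```python
# Sketch: build a functional connectivity network from regional time series
# by computing pairwise Pearson correlations and thresholding |r|.
# Region names and the threshold are illustrative assumptions.
from math import sqrt
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_edges(series: dict, threshold: float = 0.5):
    """Edges (region_a, region_b, r) for all region pairs with |r| > threshold."""
    return [(a, b, r)
            for a, b in combinations(sorted(series), 2)
            if abs(r := pearson(series[a], series[b])) > threshold]
```

The resulting edge list can then be handed to a community-detection routine (such as the greedy modularity partitioning the abstract mentions).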
Testing effects in visual short-term memory: The case of an object's size.
Makovski, Tal
2018-05-29
In many daily activities, we need to form and retain temporary representations of an object's size. Typically, such visual short-term memory (VSTM) representations follow perception and are considered reliable. Here, participants were asked to hold in mind a single simple object for a short duration and to reproduce its size by adjusting the length and width of a test probe. Experiment 1 revealed two powerful findings: First, similar to a recently reported perceptual illusion, participants greatly overestimated the size of open objects - ones with missing boundaries - relative to the same-size fully closed objects. This finding confirms that object boundaries are critical for size perception and memory. Second, and in contrast to perception, even the size of the closed objects was largely overestimated. Both inflation effects were substantial and were replicated and extended in Experiments 2-5. Experiments 6-8 used a different testing procedure to examine whether the overestimation effects are due to inflation of size in VSTM representations or to biases introduced during the reproduction phase. These data showed that while the overestimation of the open objects was repeated, the overestimation of the closed objects was not. Taken together, these findings suggest that similar to perception, only the size representation of open objects is inflated in VSTM. Importantly, they demonstrate the considerable impact of the testing procedure on VSTM tasks and further question the use of reproduction procedures for measuring VSTM.
The role of temporo-parietal junction (TPJ) in global Gestalt perception.
Huberle, Elisabeth; Karnath, Hans-Otto
2012-07-01
Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration is required not only for the cortical representation of individual objects but also for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametric degrading of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex, and the precuneus. The TPJ location corresponds well with the areas typically lesioned in stroke patients with simultanagnosia following bilateral brain damage; these patients characteristically fail to identify the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation for the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.
Dale, Naomi; Sakkalou, Elena; O'Reilly, Michelle; Springall, Clare; De Haan, Michelle; Salt, Alison
2017-07-01
To investigate how vision relates to early development by studying vision and cognition in a national cohort of 1-year-old infants with congenital disorders of the peripheral visual system and visual impairment. This was a cross-sectional observational investigation of a nationally recruited cohort of infants with 'simple' and 'complex' congenital disorders of the peripheral visual system. Entry age was 8 to 16 months. Vision level (Near Detection Scale) and non-verbal cognition (sensorimotor understanding, Reynell Zinkin Scales) were assessed, and parents completed demographic questionnaires. Of 90 infants (49 males, 41 females; mean age 13mo, standard deviation [SD] 2.5mo; range 7-17mo), 25 (28%) had profound visual impairment (light perception at best) and 65 (72%) had severe visual impairment (basic 'form' vision). The Near Detection Scale correlated significantly with sensorimotor understanding developmental quotients in the 'total', 'simple', and 'complex' groups (all p<0.001). Age and vision accounted for 48% of the variance in sensorimotor understanding. Infants with profound visual impairment, especially those in the 'complex' group with known brain involvement, showed the greatest cognitive delay. Lack of vision is associated with delayed early object-manipulation abilities and concepts; 'form' vision appeared to support early developmental advance. This paper provides baseline characteristics for cross-sectional and longitudinal follow-up investigations in progress. A methodological strength of the study was the representativeness of the cohort according to national epidemiological and population census data. © 2017 Mac Keith Press.
Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?
ERIC Educational Resources Information Center
Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde
2012-01-01
In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…
Using neuronal populations to study the mechanisms underlying spatial and feature attention
Cohen, Marlene R.; Maunsell, John H.R.
2012-01-01
Visual attention affects both perception and neuronal responses. Whether the same neuronal mechanisms mediate spatial attention, which improves perception of attended locations, and non-spatial forms of attention has been a subject of considerable debate. Spatial and feature attention have similar effects on individual neurons. Because visual cortex is retinotopically organized, however, spatial attention can co-modulate local neuronal populations, while feature attention generally requires more selective modulation. We compared the effects of feature and spatial attention on local and spatially separated populations by recording simultaneously from dozens of neurons in both hemispheres of V4. Feature and spatial attention affect the activity of local populations similarly, modulating both firing rates and correlations between pairs of nearby neurons. However, while spatial attention appears to act on local populations, feature attention is coordinated across hemispheres. Our results are consistent with a unified attentional mechanism that can modulate the responses of arbitrary subgroups of neurons. PMID:21689604
Commonalities between Perception and Cognition.
Tacca, Michela C
2011-01-01
Perception and cognition are highly interrelated. Given the influence that these systems exert on one another, it is important to explain how perceptual representations and cognitive representations interact. In this paper, I analyze the similarities between visual perceptual representations and cognitive representations in terms of their structural properties and content. Specifically, I argue that the spatial structure underlying visual object representation displays systematicity - a property that is considered to be characteristic of propositional cognitive representations. To this end, I propose a logical characterization of visual feature binding as described by Treisman's Feature Integration Theory and argue that systematicity is not only a property of language-like representations, but also of spatially organized visual representations. Furthermore, I argue that if systematicity is taken to be a criterion to distinguish between conceptual and non-conceptual representations, then visual representations, that display systematicity, might count as an early type of conceptual representations. Showing these analogies between visual perception and cognition is an important step toward understanding the interface between the two systems. The ideas here presented might also set the stage for new empirical studies that directly compare binding (and other relational operations) in visual perception and higher cognition.
Feeling form: the neural basis of haptic shape perception.
Yau, Jeffrey M; Kim, Sung Soo; Thakur, Pramodsingh H; Bensmaia, Sliman J
2016-02-01
The tactile perception of the shape of objects critically guides our ability to interact with them. In this review, we describe how shape information is processed as it ascends the somatosensory neuraxis of primates. At the somatosensory periphery, spatial form is represented in the spatial patterns of activation evoked across populations of mechanoreceptive afferents. In the cerebral cortex, neurons respond selectively to particular spatial features, like orientation and curvature. While feature selectivity of neurons in the earlier processing stages can be understood in terms of linear receptive field models, higher order somatosensory neurons exhibit nonlinear response properties that result in tuning for more complex geometrical features. In fact, tactile shape processing bears remarkable analogies to its visual counterpart and the two may rely on shared neural circuitry. Furthermore, one of the unique aspects of primate somatosensation is that it contains a deformable sensory sheet. Because the relative positions of cutaneous mechanoreceptors depend on the conformation of the hand, the haptic perception of three-dimensional objects requires the integration of cutaneous and proprioceptive signals, an integration that is observed throughout somatosensory cortex. Copyright © 2016 the American Physiological Society.
Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models
2016-01-01
Studies of the audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance perception are tested against several multisensory models, including a modified causal inference model that also predicts the distributions of the estimates. In our study, the audiovisual perception of distance was better explained overall by Bayesian causal inference than by traditional models such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. Analysis of the sensory weights allows us to obtain windows within which the audiovisual stimuli interact. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction ranging from 1 to 4 m. PMID:27959919
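The core of a causal inference model of this kind can be sketched briefly: compute the posterior probability that the visual and auditory cues share a common cause, and fuse the cues by reliability weighting when they do. This is a minimal sketch in the style of such models, not the authors' implementation; it assumes an improper flat prior over distance for simplicity, and all parameter values are illustrative.

```python
# Minimal sketch of Bayesian causal inference for audiovisual cue combination:
# posterior probability of a common cause, and reliability-weighted fusion.
# Flat (improper) spatial prior assumed for simplicity; values illustrative.
from math import exp, sqrt, pi

def gauss(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def posterior_common_cause(xv, xa, var_v, var_a, p_common=0.5):
    """P(C=1 | visual cue xv, auditory cue xa)."""
    # One cause: under a flat prior, the marginal likelihood reduces to a
    # Gaussian in the cue difference with summed variance.
    like_c1 = gauss(xv - xa, 0.0, var_v + var_a)
    like_c2 = 1.0  # constant flat-prior marginal for two independent causes
    num = like_c1 * p_common
    return num / (num + like_c2 * (1 - p_common))

def fused_estimate(xv, xa, var_v, var_a):
    """Reliability-weighted fusion of the two cues under a common cause."""
    wv = (1 / var_v) / (1 / var_v + 1 / var_a)
    return wv * xv + (1 - wv) * xa
```

Probability matching, which gave the best fit above, would then report the fused estimate with probability equal to the common-cause posterior and the unisensory estimate otherwise.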
Bilateral Theta-Burst TMS to Influence Global Gestalt Perception
Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto
2012-01-01
While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects – a deficit termed simultanagnosia – greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilateral simultaneously over TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametrical degrading of the object at the global level. Identification of the global Gestalt was significantly modulated only for the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves TPJ and is co-dependent on both hemispheres. PMID:23110106
Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin
2017-06-01
Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. Copyright © 2017 Elsevier Ltd. All rights reserved.
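A motion coherence threshold (MCT) of the kind measured above is typically read off a psychometric function fitted to proportion-correct data. The sketch below models a two-alternative task with a Weibull function and inverts it by bisection to find the coherence yielding 75% correct; the parameter values are illustrative, not fitted to the study's data.

```python
# Sketch: motion coherence threshold (MCT) from a Weibull psychometric
# function for a 2AFC task (50% guess rate). Parameters are illustrative.
from math import exp

def weibull_pc(coherence, alpha, beta, guess=0.5, lapse=0.0):
    """Proportion correct at a given motion coherence in [0, 1]."""
    return guess + (1 - guess - lapse) * (1 - exp(-(coherence / alpha) ** beta))

def coherence_threshold(alpha, beta, criterion=0.75, guess=0.5, lapse=0.0):
    """Invert the psychometric function by bisection on coherence in [0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if weibull_pc(mid, alpha, beta, guess, lapse) < criterion:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

A lower threshold (smaller alpha) corresponds to better global motion perception, the quantity related to motor scores in the study above.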
Rojas, David; Kapralos, Bill; Cristancho, Sayra; Collins, Karen; Hogue, Andrew; Conati, Cristina; Dubrowski, Adam
2012-01-01
Despite the benefits associated with virtual learning environments and serious games, there are open, fundamental questions regarding simulation fidelity, multi-modal cue interaction, and their effect on immersion, transfer of knowledge, and retention. Here we describe the results of a study that examined the effect of ambient (background) sound on the perception of visual fidelity (defined with respect to texture resolution). The results suggest that the perception of visual fidelity depends on ambient sound; more specifically, white noise can have detrimental effects on our perception of high-quality visuals. The results of this study will guide future work aimed at understanding the roles that fidelity and multi-modal interactions play in knowledge transfer and retention for users of virtual simulations and serious games.
[Visual perception abilities in children with reading disabilities].
Werpup-Stüwe, Lina; Petermann, Franz
2015-05-01
Visual perceptual abilities are increasingly neglected in research on reading disabilities. This study measured the visual perceptual abilities of children with reading disabilities. The visual perceptual abilities of 35 children with specific reading disorder and 30 controls were compared using the German version of the Developmental Test of Visual Perception – Adolescent and Adult (DTVP-A). 11% of the children with specific reading disorder showed clinically relevant performance on the DTVP-A, and the perceptual abilities of the two groups differed significantly. No significant group differences remained after controlling for general IQ or the Perceptual Reasoning Index, but they did remain after controlling for the Verbal Comprehension, Working Memory, and Processing Speed Indices. The number of children with reading difficulties who also suffer from visual perceptual disorders has been underestimated. For this reason, visual perceptual abilities should always be tested when making a reading disorder diagnosis, and IQ-test profiles of children with both reading and visual perceptual disorders should be interpreted carefully.
Object formation in visual working memory: Evidence from object-based attention.
Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei
2016-09-01
We report on how visual working memory (VWM) forms intact perceptual representations of visual objects from sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.
Cross-sensory reference frame transfer in spatial memory: the case of proprioceptive learning.
Avraamides, Marios N; Sarrou, Mikaella; Kelly, Jonathan W
2014-04-01
In three experiments, we investigated whether the information available to visual perception prior to encoding the locations of objects in a path through proprioception would influence the reference direction from which the spatial memory was formed. Participants walked a path whose orientation was misaligned to the walls of the enclosing room and to the square sheet that covered the path prior to learning (Exp. 1) and, in addition, to the intrinsic structure of a layout studied visually prior to walking the path and to the orientation of stripes drawn on the floor (Exps. 2 and 3). Despite the availability of prior visual information, participants constructed spatial memories that were aligned with the canonical axes of the path, as opposed to the reference directions primed by visual experience. The results are discussed in the context of previous studies documenting transfer of reference frames within and across perceptual modalities.
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
Report by George Sperling, New York University, under grant AFOSR 85-0364 (DTIC report 88-0551). Cited works include: Sperling, G. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347 (HIPS is the Human Information Processing Laboratory's Image Processing System); and van Santen, J. P. H., & Sperling, G. Elaborated Reichardt detectors. Journal of the Optical Society of America A, 1985.
Compensatory shifts in visual perception are associated with hallucinations in Lewy body disorders.
Bowman, Alan Robert; Bruce, Vicki; Colbourn, Christopher J; Collerton, Daniel
2017-01-01
Visual hallucinations are a common, distressing, and disabling symptom of Lewy body and other diseases. Current models suggest that interactions among internal cognitive processes generate hallucinations, but these models neglect external factors. Pareidolic illusions are an experimental analogue of hallucinations: they are easily induced in Lewy body disease, have content similar to spontaneous hallucinations, and respond to cholinesterase inhibitors in the same way. We used a primed pareidolia task with hallucinating participants with Lewy body disorders (n = 16), non-hallucinating participants with Lewy body disorders (n = 19), and healthy controls (n = 20). Participants were presented with visual "noise" that sometimes contained degraded visual objects and were required to indicate what they saw. Some perceptions were cued in advance by a visual prime. Results showed that hallucinating participants were impaired in discerning visual signals from noise, adopting a relaxed criterion for perception compared to both other groups. After presentation of a visual prime, the criterion was comparable to that of the other groups. The results suggest that participants with hallucinations compensate for perceptual deficits by relaxing their perceptual criteria, at the cost of seeing things that are not there, and that visual cues regularize perception. This latter finding may provide a mechanism for understanding the interaction between environments and hallucinations.
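The "relaxed criterion" finding is naturally expressed in signal detection theory, where sensitivity (d') and criterion (c) are computed from hit and false-alarm rates; a negative c indicates a liberal, relaxed criterion, i.e. a bias toward reporting "signal present". These are the standard formulas; the example rates in the test are illustrative, not the study's data.

```python
# Signal detection theory measures from hit and false-alarm rates:
# d' = z(H) - z(F) (sensitivity), c = -(z(H) + z(F)) / 2 (criterion).
# Negative c = liberal ("relaxed") criterion. Standard formulas.
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion_c); rates must lie strictly in (0, 1)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion
```

On this framing, the hallucinating group showed lower d' (impaired signal discrimination) together with a more negative c than the other groups.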
Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B
2014-01-01
When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.
Burnham, Denis; Dodd, Barbara
2004-12-01
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [(delta)a] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [(delta)a], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [(delta)a] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.
Clevis, Krien; Hagoort, Peter
2011-01-01
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory: presentation of an otherwise neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone and to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function for emotional information across domains such as vision and language. PMID:20530540
On the role of crossmodal prediction in audiovisual emotion perception.
Jessen, Sarah; Kotz, Sonja A
2013-01-01
Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. This visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional information. If the assumption that emotional content allows more reliable prediction can be corroborated in future studies, cross-modal prediction will prove a crucial factor in our understanding of multisensory emotion perception.
Perceptual learning in a non-human primate model of artificial vision
Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.
2016-01-01
Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, independent of the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2014-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407
ERIC Educational Resources Information Center
Habraken, Clarisse L.
1996-01-01
Highlights the need to reinvigorate chemistry education by means of the visual-spatial approach, an approach wholly in conformance with the way modern chemistry is thought about and practiced. Discusses the changing world, multiple intelligences, imagery, chemistry's pictorial language, and perceptions in chemistry. Presents suggestions on how to…
To See or Not to See: Analyzing Difficulties in Geometry from the Perspective of Visual Perception
ERIC Educational Resources Information Center
Gal, Hagar; Linchevski, Liora
2010-01-01
In this paper, we consider theories about processes of visual perception and perception-based knowledge representation (VPR) in order to explain difficulties encountered in figural processing in junior high school geometry tasks. In order to analyze such difficulties, we take advantage of the following perspectives of VPR: (1) Perceptual…
Tran, Truyet T.; Craven, Ashley P.; Leung, Tsz-Wing; Chat, Sandy W.; Levi, Dennis M.
2016-01-01
Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential ‘cross-talk’ among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency; thus, stereoacuity training is most beneficial with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also identified strategies to boost learning outcomes ‘beyond the plateau’. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations. PMID:26909178
Mining Videos for Features that Drive Attention
2015-04-01
…challenging question in neuroscience. Since the onset of visual experience, a human or animal begins to form a subjective percept which, depending on… been added based on neuroscience discoveries of mechanisms of vision in the brain, as well as useful features based on computer vision. Figure 14.1 illus…
Effect Of Contrast On Perceived Motion Of A Plaid
NASA Technical Reports Server (NTRS)
Stone, L. S.; Watson, A. B.; Mulligan, J. B.
1992-01-01
Report describes series of experiments examining effect of contrast on perception of moving plaids. Each plaid pattern used in experiments was sum of two drifting sinusoidal gratings of different orientations. One of many studies helping to show how brain processes visual information on moving patterns. When gratings forming plaid differ in contrast, apparent direction of motion of plaid biased up to 20 degrees toward direction of grating of higher contrast.
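The geometry behind this kind of experiment can be illustrated numerically. A minimal sketch, not the authors' own model: each grating's drift constrains the plaid velocity along its normal, and the unique velocity satisfying both constraints (the "intersection of constraints") predicts unbiased perception, whereas a contrast-weighted average of the component velocities, a simple textbook alternative, predicts the bias toward the higher-contrast grating. All angles and weights below are illustrative.

```python
import numpy as np

def ioc_direction(theta1, theta2, s1, s2):
    """Intersection-of-constraints direction (degrees). Each grating i
    constrains the plaid velocity v via n_i . v = s_i, where n_i is the
    unit normal at angle theta_i (radians) and s_i the drift speed."""
    n = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    v = np.linalg.solve(n, np.array([s1, s2]))
    return np.degrees(np.arctan2(v[1], v[0]))

def weighted_vector_average(theta1, theta2, s1, s2, c1, c2):
    """Contrast-weighted average of the component normal velocities:
    the higher-contrast grating pulls the predicted direction toward
    its own drift direction."""
    v1 = s1 * np.array([np.cos(theta1), np.sin(theta1)])
    v2 = s2 * np.array([np.cos(theta2), np.sin(theta2)])
    v = (c1 * v1 + c2 * v2) / (c1 + c2)
    return np.degrees(np.arctan2(v[1], v[0]))

# Symmetric plaid: gratings drifting 30 deg either side of vertical.
t1, t2 = np.radians(60), np.radians(120)
unbiased = ioc_direction(t1, t2, 1.0, 1.0)                   # 90 deg
biased = weighted_vector_average(t1, t2, 1.0, 1.0, 0.3, 0.1)  # pulled toward t1
```

With equal contrasts both models predict vertical motion; unequal contrasts pull the weighted-average prediction toward the higher-contrast component, qualitatively matching the reported bias.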
Wang, Li; Sun, Yuhua; Zhou, Xinlin
2016-01-01
Previous studies have observed inconsistent relations between the acuity of the Approximate Number System (ANS) and mathematical achievement. In this paper, we hypothesize that the relation between ANS acuity and mathematical achievement is influenced by fluency; that is, a mathematical achievement test covering a greater expanse of mathematical fluency may better reflect the relation between ANS acuity and mathematics skills. Three types of mathematical achievement test were used in this study: a subtraction test, a graded test, and a semester-final examination. The subtraction test was designed to measure mathematical fluency. The graded test was more fluency-based than the semester-final examination, but both involved the same mathematical knowledge from the class curriculum. A total of 219 fifth graders from primary schools performed all three tests, then completed a numerosity comparison task, a visual form perception task (figure matching), and a series of other tasks assessing general cognitive processes (mental rotation, non-verbal matrix reasoning, and choice reaction time). The findings were consistent with our expectations. The relation between ANS acuity and mathematical achievement was particularly clearly reflected in the participants’ performance on the visual form perception task, which supports domain-general explanations of the mechanisms underlying the relation between ANS acuity and math achievement. PMID:28066291
Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location
Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene
2017-01-01
Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005
Audio aided electro-tactile perception training for finger posture biofeedback.
Vargas, Jose Gonzalez; Yu, Wenwei
2008-01-01
Visual information is one of the prerequisites for most biofeedback studies. The aim of this study was to explore how audio aided training helps in the learning of dynamic electro-tactile perception without any visual feedback. In this research, the electrical stimulation patterns associated with the experimenter's finger postures and motions were presented to the subjects. Along with the electrical stimulation patterns, two types of information on finger postures and motions, verbal and audio, were presented to the verbal training subject group (group 1) and the audio training subject group (group 2), respectively. The results showed an improvement in the ability to distinguish and memorize electrical stimulation patterns corresponding to finger postures and motions without visual feedback; with the aid of audio tones, learning was faster and perception became more precise after training. Thus, this study clarified that, as a substitute for visual presentation, auditory information can effectively aid the formation of electro-tactile perception. Further research is needed to clarify the difference between visually guided and audio aided training in terms of information compilation, post-training effect, and robustness of the perception.
Burnat, Kalina; Hu, Tjing-Tjing; Kossut, Małgorzata; Eysel, Ulf T; Arckens, Lutgarde
2017-09-13
Induction of a central retinal lesion in both eyes of adult mammals is a model for macular degeneration and leads to retinotopic map reorganization in the primary visual cortex (V1). Here we characterized the spatiotemporal dynamics of molecular activity levels in the central and peripheral representation of five higher-order visual areas (V2/18, V3/19, V4/21a, V5/PMLS, and area 7) and of V1/17, in adult cats with central 10° retinal lesions (both sexes), by means of real-time PCR for the neuronal activity reporter gene zif268. The lesions elicited a similar, permanent reduction in activity in the center of the lesion projection zone of areas V1/17, V2/18, V3/19, and V4/21a, but not in the motion-driven V5/PMLS, which instead displayed an increase in molecular activity at 3 months postlesion, independent of visual field coordinates. Area 7 displayed decreased activity in its LPZ only in the first weeks postlesion and increased activity in its periphery from 1 month onward. We therefore examined the impact of central vision loss on motion perception, using random dot kinematograms to test the capacity for form-from-motion detection based on direction and velocity cues. We revealed that the central retinal lesions either do not impair motion detection or even result in better performance, specifically when motion discrimination was based on velocity discrimination. In conclusion, we propose that central retinal damage leads to enhanced peripheral vision by sensitizing the visual system for motion processing relying on feedback from V5/PMLS and area 7. SIGNIFICANCE STATEMENT Central retinal lesions, a model for macular degeneration, result in functional reorganization of the primary visual cortex. Examining the level of cortical reactivation with the molecular activity marker zif268 revealed reorganization in visual areas outside V1. 
Retinotopic lesion projection zones typically display an initial depression in zif268 expression, followed by partial recovery with postlesion time. Only the motion-sensitive area V5/PMLS shows no decrease, and even a significant activity increase at 3 months post-retinal lesion. Behavioral tests of motion perception found no impairment and even better sensitivity to higher random dot stimulus velocities. We demonstrate that the loss of central vision induces functional mobilization of motion-sensitive visual cortex, resulting in enhanced perception of moving stimuli. Copyright © 2017 the authors 0270-6474/17/378989-11$15.00/0.
The Neural Basis of Mark Making: A Functional MRI Study of Drawing
Yuan, Ye; Brown, Steven
2014-01-01
Compared to most other forms of visually-guided motor activity, drawing is unique in that it “leaves a trail behind” in the form of the emanating image. We took advantage of an MRI-compatible drawing tablet in order to examine both the motor production and perceptual emanation of images. Subjects participated in a series of mark making tasks in which they were cued to draw geometric patterns on the tablet's surface. The critical comparison was between when visual feedback was displayed (image generation) versus when it was not (no image generation). This contrast revealed an occipito-parietal stream involved in motion-based perception of the emerging image, including areas V5/MT+, LO, V3A, and the posterior part of the intraparietal sulcus. Interestingly, when subjects passively viewed animations of visual patterns emerging on the projected surface, all of the sensorimotor network involved in drawing was strongly activated, with the exception of the primary motor cortex. These results argue that the origin of the human capacity to draw and write involves not only motor skills for tool use but also motor-sensory links between drawing movements and the visual images that emanate from them in real time. PMID:25271440
Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia
2018-01-17
During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.
3D Visualization of Mangrove and Aquaculture Conversion in Banate Bay, Iloilo
NASA Astrophysics Data System (ADS)
Domingo, G. A.; Mallillin, M. M.; Perez, A. M. C.; Claridades, A. R. C.; Tamondong, A. M.
2017-10-01
Studies have shown that mangrove forests in the Philippines have been drastically reduced by conversion to fishponds, salt ponds, reclamation, and other forms of industrial development; as of 2011, 95 % of Iloilo's mangrove forest had been converted to fishponds. In this research, six Landsat images acquired in the years 1973, 1976, 2000, 2006, 2010, and 2016 were classified using Support Vector Machine (SVM) classification to determine land cover changes, particularly the change in mangrove and aquaculture area from 1976 to 2016. The classification results were used as layers for generating 3D visualization models on four platforms, namely Google Earth, ArcScene, Virtual Terrain Project, and Terragen. A perception survey was conducted among respondents with different levels of expertise in spatial analysis, 3D visualization, and forestry, fisheries, and aquatic resources, to assess the usability, effectiveness, and potential of the various platforms used. Change detection showed that the largest decrease in mangrove area occurred from 1976 to 2000, with mangrove area falling from 545.374 hectares to 286.935 hectares. The largest increase in fishpond area occurred from 1973 to 1976, rising from 2,930.67 hectares to 3,441.51 hectares. Results of the perception survey showed that ArcScene was preferred for spatial analysis, while respondents favored Terragen for 3D visualization and for forestry, fishery, and aquatic resources applications.
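The classification step described above can be sketched in a few lines. This is a minimal illustration of SVM land-cover classification, not the authors' pipeline: the spectral signatures, band choices, and class means below are invented for the example, and real work would use atmospherically corrected Landsat bands with ground-truth training polygons.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-band "reflectances" (e.g., red and NIR) for two classes:
# vegetation-like mangrove (high NIR) and water-like fishpond (low NIR).
rng = np.random.default_rng(0)
mangrove = rng.normal(loc=[0.05, 0.45], scale=0.02, size=(100, 2))
fishpond = rng.normal(loc=[0.08, 0.05], scale=0.02, size=(100, 2))

X = np.vstack([mangrove, fishpond])
y = np.array([0] * 100 + [1] * 100)      # 0 = mangrove, 1 = fishpond

clf = SVC(kernel="rbf", C=1.0).fit(X, y)

# Classify two unseen "pixels" and convert counts to area: a 30 m
# Landsat pixel covers roughly 0.09 ha, so class areas (and hence
# change between dates) follow directly from the per-pixel labels.
pixels = np.array([[0.05, 0.44], [0.09, 0.06]])
labels = clf.predict(pixels)
mangrove_ha = np.sum(labels == 0) * 0.09
```

Running the same trained (or per-date retrained) classifier on images from different years and differencing the per-class areas is the essence of the change detection reported in the abstract.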
Public health nurse perceptions of Omaha System data visualization.
Lee, Seonah; Kim, Era; Monsen, Karen A
2015-10-01
Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. Our aim was to visualize PHN-generated Omaha System data and to assess PHN perceptions of the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. Time-oriented visualization for problems and outcomes and Matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated the prototype visualizations. Overall, PHN responses to the visualizations were positive, and feedback for improvement was provided. This study demonstrated the potential of visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and to assess the potential to incorporate them within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Maekawa, Toshihiko; Tobimatsu, Shozo; Inada, Naoko; Oribe, Naoya; Onitsuka, Toshiaki; Kanba, Shigenobu; Kamio, Yoko
2011-01-01
Individuals with high-functioning autism spectrum disorder (HF-ASD) often show superior performance in simple visual tasks, despite difficulties in the perception of socially important information such as facial expression. The neural basis of visual perception abnormalities associated with HF-ASD is currently unclear. We sought to elucidate the…
ERIC Educational Resources Information Center
Association for Education of the Visually Handicapped, Philadelphia, PA.
Essays on the visually handicapped are concerned with congenital rubella, an evaluation of multiply handicapped children, the use and abuse of the IQ, visual perception dysfunction, spatial perceptions in the partially sighted, programs in daily living skills, sex education needs, and physical activity as an enhancement of functioning. Other…
Biometric Research in Perception and Neurology Related to the Study of Visual Communication.
ERIC Educational Resources Information Center
Metallinos, Nikos
Contemporary research findings in the fields of perceptual psychology and neurology of the human brain that are directly related to the study of visual communication are reviewed and briefly discussed in this paper. Specifically, the paper identifies those major research findings in visual perception that are relevant to the study of visual…
Buchanan, John J
2016-01-01
The primary goal of this chapter is to merge together the visual perception perspective of observational learning and the coordination dynamics theory of pattern formation in perception and action. Emphasis is placed on identifying movement features that constrain and inform action-perception and action-production processes. Two sources of visual information are examined, relative motion direction and relative phase. The visual perception perspective states that the topological features of relative motion between limbs and joints remain invariant across an actor's motion and are therefore available for pickup by an observer. Relative phase has been put forth as an informational variable that links perception to action within the coordination dynamics theory. A primary assumption of the coordination dynamics approach is that environmental information is meaningful only in terms of the behavior it modifies. Across a series of single limb tasks and bimanual tasks it is shown that the relative motion and relative phase between limbs and joints are picked up through visual processes and support observational learning of motor skills. Moreover, internal estimations of motor skill proficiency and competency are linked to the informational content found in relative motion and relative phase. Thus, the chapter links action to perception and vice versa and also links cognitive evaluations to the coordination dynamics that support action-perception and action-production processes.
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision which supports the idea of an essential although not independent role of vision in self motion perception.
Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles
2012-01-01
We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual speech signal is more salient, temporal perception of speech would be modulated by the visual speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly salient speech signals, with the visual signals requiring smaller visual leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual speech signal may lead to higher probabilities regarding the identity of the auditory signal, which modulate the temporal window of multisensory integration of the speech stimulus. PMID:23060756
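The PSS and sensitivity measures in TOJ studies like this one are typically read off a psychometric function fitted to the response proportions. A minimal sketch, with hypothetical data (the SOA values and response proportions below are invented, not taken from the study): fit a cumulative Gaussian whose mean is the PSS and whose spread indexes temporal sensitivity.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """P('auditory first') as a cumulative Gaussian of stimulus onset
    asynchrony (SOA, ms; positive = auditory leads). The mean is the
    point of subjective simultaneity (PSS); sigma indexes sensitivity."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical TOJ data: proportion of 'auditory first' responses per SOA.
soas = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])
p_aud_first = np.array([0.05, 0.15, 0.30, 0.55, 0.80, 0.92, 0.98])

(pss, sigma), _ = curve_fit(psychometric, soas, p_aud_first, p0=(0.0, 80.0))
# A negative PSS means the visual stream must lead physically for the
# two streams to be perceived as simultaneous (a "visual lead" at the PSS).
```

Smaller visual leads at the PSS for highly salient visual speech, as reported above, would show up here as a fitted PSS closer to zero.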
Contextual effects on motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2008-08-15
Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing, with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can nevertheless behave quite differently, depending on the visual context.
A new taxonomy for perceptual filling-in
Weil, Rimona S.; Rees, Geraint
2011-01-01
Perceptual filling-in occurs when structures of the visual system interpolate information across regions of visual space where that information is physically absent. It is a ubiquitous and heterogeneous phenomenon, which takes place in different forms almost every time we view the world around us, such as when objects are occluded by other objects or when they fall behind the blind spot. Yet, to date, there is no clear framework for relating these various forms of perceptual filling-in. Similarly, whether these and other forms of filling-in share common mechanisms is not yet known. Here we present a new taxonomy to categorize the different forms of perceptual filling-in. We then examine experimental evidence for the processes involved in each type of perceptual filling-in. Finally, we use established theories of general surface perception to show how contextualizing filling-in using this framework broadens our understanding of the possible shared mechanisms underlying perceptual filling-in. In particular, we consider the importance of the presence of boundaries in determining the phenomenal experience of perceptual filling-in. PMID:21059374
Improving spatial perception in 5-yr.-old Spanish children.
Jiménez, Andrés Canto; Sicilia, Antonio Oña; Vera, Juan Granda
2007-06-01
Assimilation of distance perception was studied in 70 Spanish primary school children. This assimilation involves the generation of projective images which are acquired through two mechanisms. One mechanism is spatial perception, wherein perceptual processes develop ensuring successful immersion in space and the acquisition of visual cues which a person may use to interpret images seen in the distance. The other mechanism is movement through space so that these images are produced. The present study evaluated the influence on improvements in spatial perception of using increasingly larger spaces for training sessions within a motor skills program. Visual parameters were measured in relation to the capture and tracking of moving objects or ocular motility and speed of detection or visual reaction time. Analysis showed that for the group trained in increasingly larger spaces, ocular motility and visual reaction time were significantly improved during different phases of the program.
Body ownership promotes visual awareness.
van der Hoort, Björn; Reingardt, Maria; Ehrsson, H Henrik
2017-08-17
The sense of ownership of one's body is important for survival, e.g., in defending the body against a threat. However, in addition to affecting behavior, it also affects perception of the world. In the case of visuospatial perception, it has been shown that the sense of ownership causes external space to be perceptually scaled according to the size of the body. Here, we investigated the effect of ownership on another fundamental aspect of visual perception: visual awareness. In two binocular rivalry experiments, we manipulated the sense of ownership of a stranger's hand through visuotactile stimulation while that hand was one of the rival stimuli. The results show that ownership, but not mere visuotactile stimulation, increases the dominance of the hand percept. This effect is due to a combination of longer perceptual dominance durations and shorter suppression durations. Together, these results suggest that the sense of body ownership promotes visual awareness.
Curvilinear approach to an intersection and visual detection of a collision.
Berthelon, C; Mestre, D
1993-09-01
Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion--for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.
Visual Perception Based Rate Control Algorithm for HEVC
NASA Astrophysics Data System (ADS)
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an essential video coding technology for balancing video quality against limited encoding resources during video communication. However, the HEVC rate control benchmark algorithm ignores subjective visual perception: for key focus regions, LCU-level bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm achieves an average BD-BR reduction of 0.5% and a maximum reduction of 1.09%, at no cost in bitrate accuracy, compared with the HEVC reference software (HM15.0). The proposed algorithm thus improves subjective video quality across a range of video applications.
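The perception-weighted bit allocation described in this abstract can be sketched in a few lines. This is a minimal illustration only: the linear blend of luminance and motion saliency, the `alpha` parameter, and both function names are assumptions for exposition, not the paper's actual formulas.

```python
from typing import List

def perceptual_weight(luminance: float, motion: float, alpha: float = 0.5) -> float:
    # Blend luminance and motion saliency (each normalized to [0, 1])
    # into one perceptual weight per LCU. The linear blend and alpha
    # are illustrative assumptions, not the published model.
    return alpha * luminance + (1.0 - alpha) * motion

def allocate_bits(frame_budget: int, weights: List[float]) -> List[int]:
    # Split the frame-level bit budget across LCUs in proportion to
    # their perceptual weights, so salient LCUs receive more bits.
    total = sum(weights)
    return [round(frame_budget * w / total) for w in weights]

# A bright, moving LCU receives a larger share of the budget than a
# dark, static one.
weights = [perceptual_weight(0.9, 0.8), perceptual_weight(0.2, 0.1)]
bits = allocate_bits(10000, weights)
```

In an actual encoder, λ and QP would then be derived from each LCU's allocated bits (e.g., via the R-λ model used by HM); that step is omitted here.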
Perception of Emotion: Differences in Mode of Presentation, Sex of Perceiver, and Race of Expressor.
ERIC Educational Resources Information Center
Kozel, Nicholas J.; Gitter, A. George
A 2 x 2 x 4 factorial design was utilized to investigate the effects of sex of perceiver, race of expressor (Negro and White), and mode of presentation of stimuli (audio and visual, visual only, audio only, and still pictures) on perception of emotion (POE). Perception of seven emotions (anger, happiness, surprise, fear, disgust, pain, and…
Molloy, Carly S; Di Battista, Ashley M; Anderson, Vicki A; Burnett, Alice; Lee, Katherine J; Roberts, Gehan; Cheong, Jeanie Ly; Anderson, Peter J; Doyle, Lex W
2017-04-01
Children born extremely preterm (EP, <28 weeks) and/or extremely low birth weight (ELBW, <1000 g) have more academic deficiencies than their term-born peers, which may be due to problems with visual processing. The aims of this study were to determine (1) whether visual processing is related to poor academic outcomes in EP/ELBW adolescents, and (2) how much of the variance in academic achievement in EP/ELBW adolescents is explained by visual processing ability after controlling for perinatal risk factors and other known contributors to academic performance, particularly attention and working memory. A geographically determined cohort of 228 surviving EP/ELBW adolescents (mean age 17 years) was studied. The relationships between measures of visual processing (visual acuity, binocular stereopsis, eye convergence, and visual perception) and academic achievement were explored within the EP/ELBW group. Analyses were repeated controlling for perinatal and social risk, and measures of attention and working memory. Visual acuity, convergence, and visual perception were related to academic achievement scores on univariable regression analyses. After controlling for potential confounds (perinatal and social risk, working memory, and attention), visual acuity, convergence, and visual perception remained associated with reading and math computation, but only convergence and visual perception remained related to spelling. The additional variance explained by visual processing was up to 6.6% for reading, 2.7% for spelling, and 2.2% for math computation. None of the visual processing variables or visual motor integration was associated with handwriting on multivariable analysis. Working memory was generally a stronger predictor of reading, spelling, and math computation than visual processing.
It was concluded that visual processing difficulties are significantly related to academic outcomes in EP/ELBW adolescents; therefore, specific attention should be paid to academic remediation strategies incorporating the management of working memory and visual processing in EP/ELBW children.
Tracking without perceiving: a dissociation between eye movements and motion perception.
Spering, Miriam; Pomplun, Marc; Carrasco, Marisa
2011-02-01
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353
The Role of Amodal Surface Completion in Stereoscopic Transparency
Anderson, Barton L.; Schmid, Alexandra C.
2012-01-01
Previous work has shown that the visual system can decompose stereoscopic textures into percepts of inhomogeneous transparency. We investigate whether this form of layered image decomposition is shaped by constraints on amodal surface completion. We report a series of experiments demonstrating that stereoscopic depth differences are easier to discriminate when the stereo images generate a coherent percept of surface color than when the images require amodally integrating a series of color changes into a coherent surface. Our results provide further evidence for the intimate link between the segmentation processes that occur in conditions of transparency and occlusion and the interpolation processes involved in the formation of amodally completed surfaces. PMID:23060829
Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection
Lupyan, Gary; Spivey, Michael J.
2010-01-01
Background: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings: Participants completed an object detection task in which they made an object-presence or -absence decision for briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′); a visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing, and they inform our understanding of how language affects perception. PMID:20628646
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The structure of the human eye and the absorption spectra of its pigments limit our visual perception of light, which is most responsive to stimuli in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near-infrared light, with a sensitivity that paradoxically increases at wavelengths above 900 nm and displays quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound, 11-cis-retinyl-propylamine Schiff base, demonstrate direct two-photon isomerization of the visual chromophore. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings show that human visual perception of near-infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
The effect of phasic auditory alerting on visual perception.
Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas
2017-08-01
Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information. Copyright © 2017 Elsevier B.V. All rights reserved.
Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan
2015-08-01
Visual working memory (VWM) has traditionally been viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; rather, the stored visual information continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of VWM content by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and these were sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation orders. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.
Short-term memory affects color perception in context.
Olkkonen, Maria; Allred, Sarah R
2014-01-01
Color-based object selection - for instance, looking for ripe tomatoes in the market - places demands on both perceptual and memory processes: it is necessary to form a stable perceptual estimate of surface color from a variable visual signal, as well as to retain multiple perceptual estimates in memory while comparing objects. Nevertheless, perceptual and memory processes in the color domain are generally studied in separate research programs with the assumption that they are independent. Here, we demonstrate a strong failure of independence between color perception and memory: the effect of context on color appearance is substantially weakened by a short retention interval between a reference and test stimulus. This somewhat counterintuitive result is consistent with Bayesian estimation: as the precision of the representation of the reference surface and its context decays in memory, prior information gains more weight, causing the retained percepts to be drawn toward prior information about surface and context color. This interaction implies that to fully understand information processing in real-world color tasks, perception and memory need to be considered jointly.
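The Bayesian account sketched in this abstract can be made concrete with a standard reliability-weighted estimator (an illustrative textbook formula, not the authors' fitted model): as the noise on the remembered observation grows over the retention interval, the weight on that observation shrinks and the estimate is drawn toward the prior.

```python
def bayes_estimate(x: float, sigma_x: float,
                   mu_prior: float, sigma_prior: float) -> float:
    # Reliability-weighted combination of a noisy (remembered)
    # observation x with a prior: larger sigma_x (memory decay)
    # shifts the estimate toward mu_prior.
    w = sigma_prior**2 / (sigma_prior**2 + sigma_x**2)
    return w * x + (1 - w) * mu_prior

# With low memory noise the estimate stays near the observation;
# after a retention interval (higher noise) it is pulled to the prior.
immediate = bayes_estimate(10.0, 1.0, 0.0, 3.0)  # stays close to 10
delayed = bayes_estimate(10.0, 3.0, 0.0, 3.0)    # pulled toward 0
```

The same weighting applies to the remembered context, which is why the context's influence on color appearance weakens after a delay.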
Simione, Luca; Akyürek, Elkan G; Vastola, Valentina; Raffone, Antonino; Bowman, Howard
2017-05-01
We investigated the relationship between different kinds of target reports in a rapid serial visual presentation task, and their associated perceptual experience. Participants reported the identity of two targets embedded in a stream of stimuli and their associated subjective visibility. In our task, target stimuli could be combined together to form more complex ones, thus allowing participants to report temporally integrated percepts. We found that integrated percepts were associated with high subjective visibility scores, whereas reports in which the order of targets was reversed led to a poorer perceptual experience. We also found a reciprocal relationship between the chance of the second target not being reported correctly and the perceptual experience associated with the first one. Principally, our results indicate that integrated percepts are experienced as a unique, clear perceptual event, whereas order reversals are experienced as confused, similar to cases in which an entirely wrong response was given. Copyright © 2017 Elsevier Inc. All rights reserved.
A conceptual review on action-perception coupling in the musicians’ brain: what is it good for?
Novembre, Giacomo; Keller, Peter E.
2014-01-01
Experience with a sensorimotor task, such as practicing a piano piece, leads to strong coupling of sensory (visual or auditory) and motor cortices. Here we review behavioral and neurophysiological (M/EEG, TMS and fMRI) research exploring this topic using the brain of musicians as a model system. Our review focuses on a recent body of evidence suggesting that this form of coupling might have (at least) two cognitive functions. First, it leads to the generation of equivalent predictions (concerning both when and what event is more likely to occur) during both perception and production of music. Second, it underpins the common coding of perception and action that supports the integration of the motor output of multiple musicians in the context of joint musical tasks. Essentially, training-based coupling of perception and action might scaffold the human ability to represent complex (structured) actions and to entrain multiple agents—via reciprocal prediction and adaptation—in the pursuit of shared goals. PMID:25191246
Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne
2017-02-20
Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Clonal selection versus clonal cooperation: the integrated perception of immune objects
Nataf, Serge
2016-01-01
Analogies between the immune and nervous systems were first envisioned by the immunologist Niels Jerne, who introduced the concepts of antigen "recognition" and immune "memory". Since then, however, it appears that only the cognitive immunology paradigm proposed by Irun Cohen has attempted to further theorize immune system function through the prism of neurosciences. The present paper aims to revisit this analogy-based reasoning. In particular, a parallel is drawn between the brain pathways of visual perception and the processes allowing the global perception of an "immune object". Thus, in the visual system, distinct features of a visual object (shape, color, motion) are perceived separately by distinct neuronal populations during a primary perception task. The output signals generated during this first step then instruct an integrated perception task performed by other neuronal networks. Such a higher-order perception step is in essence a cooperative task that is mandatory for the global perception of visual objects. Based on a re-interpretation of recent experimental data, it is suggested that similar general principles drive the integrated perception of immune objects in secondary lymphoid organs (SLOs). In this scheme, the four main categories of signals characterizing an immune object (antigenic, contextual, temporal and localization signals) are first perceived separately by distinct networks of immunocompetent cells. Then, in a multitude of SLO niches, the output signals generated during this primary perception step are integrated by TH-cells at the single-cell level. This process eventually generates a multitude of T-cell and B-cell clones that perform, at the scale of SLOs, an integrated perception of immune objects. Overall, this new framework proposes that integrated immune perception and, consequently, integrated immune responses rely essentially on clonal cooperation rather than clonal selection. PMID:27830060
Perception of CPR quality: Influence of CPR feedback, Just-in-Time CPR training and provider role.
Cheng, Adam; Overly, Frank; Kessler, David; Nadkarni, Vinay M; Lin, Yiqun; Doan, Quynh; Duff, Jonathan P; Tofil, Nancy M; Bhanji, Farhan; Adler, Mark; Charnovich, Alex; Hunt, Elizabeth A; Brown, Linda L
2015-02-01
Many healthcare providers rely on visual perception to guide cardiopulmonary resuscitation (CPR), but little is known about the accuracy of provider perceptions of CPR quality. We aimed to describe the difference between perceived and measured CPR quality, and to determine the impact of provider role, real-time visual CPR feedback, and Just-in-Time (JIT) CPR training on provider perceptions. We conducted secondary analyses of data collected from a prospective, multicenter, randomized trial of 324 healthcare providers who participated in a simulated cardiac arrest scenario between July 2012 and April 2014. Participants were randomized to one of four permutations of JIT CPR training and real-time visual CPR feedback. We calculated the difference between perceived and measured quality of CPR and reported the proportion of subjects accurately estimating the quality of CPR within each study arm. Participants overestimated achieving adequate chest compression depth (mean difference range: 16.1-60.6%) and rate (range: 0.2-51%), and underestimated chest compression fraction (0.2-2.9%) across all arms. Compared to no intervention, the use of real-time feedback and JIT CPR training (alone or in combination) improved perception of depth (p<0.001). Accurate estimation of CPR quality was poor for chest compression depth (0-13%), rate (5-46%), and chest compression fraction (60-63%). Perception of depth was more accurate in CPR providers than in team leaders (27.8% vs. 7.4%; p=0.043) when using real-time feedback. Healthcare providers' visual perception of CPR quality is poor. Perceptions of CPR depth are improved by using real-time visual feedback and with prior JIT CPR training. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Parallel processing of general and specific threat during early stages of perception
2016-01-01
Differential processing of threat can consummate as early as 100 ms post-stimulus. Moreover, early perception not only differentiates threat from non-threat stimuli but also distinguishes among discrete threat subtypes (e.g. fear, disgust and anger). Combining spatial-frequency-filtered images of fear, disgust and neutral scenes with high-density event-related potentials and intracranial source estimation, we investigated the neural underpinnings of general and specific threat processing in early stages of perception. Conveyed in low spatial frequencies, fear and disgust images evoked convergent visual responses with similarly enhanced N1 potentials and dorsal visual (middle temporal gyrus) cortical activity (relative to neutral cues; peaking at 156 ms). Nevertheless, conveyed in high spatial frequencies, fear and disgust elicited divergent visual responses, with fear enhancing and disgust suppressing P1 potentials and ventral visual (occipital fusiform) cortical activity (peaking at 121 ms). Therefore, general and specific threat processing operates in parallel in early perception, with the ventral visual pathway engaged in specific processing of discrete threats and the dorsal visual pathway in general threat processing. Furthermore, selectively tuned to distinctive spatial-frequency channels and visual pathways, these parallel processes underpin dimensional and categorical threat characterization, promoting efficient threat response. These findings thus lend support to hybrid models of emotion. PMID:26412811
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine
2016-05-11
The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? 
We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Coherent modulation of stimulus colour can affect visually induced self-motion perception.
Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2010-01-01
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. The colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.
Neural Integration in Body Perception.
Ramsey, Richard
2018-06-19
The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.
Psycho acoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals in various auditory tasks, such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve congenitally visually impaired participants aged 18 to 40 years were recruited, along with an equal number of normally sighted participants. All participants had normal hearing sensitivity and normal middle ear functioning. Individuals with visual impairment showed superior thresholds in MDT, SRDT, and SNR50 compared with normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Individuals with visual impairment thus showed superior performance in complex auditory processing and speech perception tasks.
Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H; Oğmen, Haluk
2008-07-15
The 1990s, the "decade of the brain," witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this "steady-state approach," more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness.
Mechanisms of migraine aura revealed by functional MRI in human visual cortex
Hadjikhani, Nouchine; Sanchez del Rio, Margarita; Wu, Ona; Schwartz, Denis; Bakker, Dick; Fischl, Bruce; Kwong, Kenneth K.; Cutrer, F. Michael; Rosen, Bruce R.; Tootell, Roger B. H.; Sorensen, A. Gregory; Moskowitz, Michael A.
2001-01-01
Cortical spreading depression (CSD) has been suggested to underlie migraine visual aura. However, it has been challenging to test this hypothesis in human cerebral cortex. Using high-field functional MRI with near-continuous recording during visual aura in three subjects, we observed blood oxygenation level-dependent (BOLD) signal changes that demonstrated at least eight characteristics of CSD, time-locked to percept/onset of the aura. Initially, a focal increase in BOLD signal (possibly reflecting vasodilation), developed within extrastriate cortex (area V3A). This BOLD change progressed contiguously and slowly (3.5 ± 1.1 mm/min) over occipital cortex, congruent with the retinotopy of the visual percept. Following the same retinotopic progression, the BOLD signal then diminished (possibly reflecting vasoconstriction after the initial vasodilation), as did the BOLD response to visual activation. During periods with no visual stimulation, but while the subject was experiencing scintillations, BOLD signal followed the retinotopic progression of the visual percept. These data strongly suggest that an electrophysiological event such as CSD generates the aura in human visual cortex. PMID:11287655
Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren
2012-10-01
Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived even though the contour edges are incomplete, have been extensively studied as an example of such visual completion phenomena. Despite reports of neural activity in response to ICs in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unresolved. For example, how do the visual areas function in IC perception, and how do they interact to achieve coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that neural activity was stronger in response to stimuli with more contour completion than to stimuli with more contour representation in V1 and V2, the reverse of the pattern in the LOC. When inspecting the change in neural activity across the visual pathway, activation remained high for stimuli with more contour completion and increased for stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and a possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.
Training of attention functions in children with attention deficit hyperactivity disorder.
Tucha, Oliver; Tucha, Lara; Kaumann, Gesa; König, Sebastian; Lange, Katharina M; Stasik, Dorota; Streather, Zoe; Engelschalk, Tobias; Lange, Klaus W
2011-09-01
Pharmacological treatment of children with ADHD has been shown to be successful; however, medication may not normalize attention functions. The present study was based on a neuropsychological model of attention and assessed the effect of an attention training program on attentional functioning of children with ADHD. Thirty-two children with ADHD and 16 healthy children participated in the study. Children with ADHD were randomly assigned to one of the two conditions, i.e., an attention training program which trained aspects of vigilance, selective attention and divided attention, or a visual perception training which trained perceptual skills, such as perception of figure and ground, form constancy and position in space. The training programs were applied in individual sessions, twice a week, for a period of four consecutive weeks. Healthy children did not receive any training. Alertness, vigilance, selective attention, divided attention, and flexibility were examined prior to and following the interventions. Children with ADHD were assessed and trained while on ADHD medications. Data analysis revealed that the attention training used in the present study led to significant improvements of various aspects of attention, including vigilance, divided attention, and flexibility, while the visual perception training had no specific effects. The findings indicate that attention training programs have the potential to facilitate attentional functioning in children with ADHD treated with ADHD drugs.
Visualizing the Perception Filter and Breaching It with Active-Learning Strategies
ERIC Educational Resources Information Center
White, Harold B.
2012-01-01
Teachers' perception filter operates in all realms of their consciousness. It plays an important part in what and how students learn and should play a central role in what and how they teach. This may be obvious, but having a visual model of a perception filter can guide the way they think about education. In this article, the author talks about…
Effects of Form Perception and Meaning on the Visual Evoked Potential with Author’s Update
2009-09-01
[Abstract text garbled in extraction; recoverable fragments cite work on controlling binocular rivalry (Vision Research, 2001, 41: 2943-2950) and the writings of S. Freud (1990, 1995), and note that psychoanalytic talk therapies associated with Freud gave results less accessible to scientific method.]
1982-03-01
[Abstract text garbled in extraction; recoverable fragments concern two qualitatively different forms of human information processing (James, 1890; Hasher & Zacks, 1979; LaBerge, 1973, 1975; Logan, 1978, 1979) and cite work on item recognition and visual search (Kristofferson, Perception & Psychophysics, 1972) and on the acquisition of automatic processing in perceptual learning (LaBerge, 1973).]
Bókkon, I; Salari, V; Tuszynski, J A; Antal, I
2010-09-02
Recently, we have proposed a redox molecular hypothesis about the natural biophysical substrate of visual perception and imagery [1,6]. Namely, the retina transforms external photon signals into electrical signals that are carried to V1 (striate cortex). V1 retinotopic electrical signals (spike-related electrical signals along classical axonal-dendritic pathways) can then be converted into regulated ultraweak bioluminescent photons (biophotons) through redox processes within retinotopic visual neurons, making it possible to create intrinsic biophysical pictures during visual perception and imagery. However, the consensus opinion is to consider biophotons as by-products of cellular metabolism. This paper argues that biophotons are not mere by-products but rather originate from regulated cellular radical/redox processes. It also shows that biophoton intensity can be considerably higher inside cells than outside. Our simple calculations suggest that, within a reasonable level of accuracy, the real biophoton intensity in retinotopic neurons may be sufficient for creating an intrinsic biophysical picture representation of a single-object image during visual perception. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotion has focused on unimodal emotional cues, predominantly visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from the visual and auditory sensory channels affect pupil size. The aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indices. Pupil size, electrodermal activity, and subjective affective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent), and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from the autonomic responses. Emotional stimuli are thus able to trigger physiological responses regardless of valence, sensory modality, or level of emotional congruence.
Reduced efficiency of audiovisual integration for nonnative speech.
Yi, Han-Gyol; Phelps, Jasmine E B; Smiljanic, Rajka; Chandrasekaran, Bharath
2013-11-01
The role of visual cues in native listeners' perception of speech produced by nonnative speakers has not been extensively studied. Native perception of English sentences produced by native English and Korean speakers in audio-only and audiovisual conditions was examined. Korean speakers were rated as more accented in audiovisual than in the audio-only condition. Visual cues enhanced word intelligibility for native English speech but less so for Korean-accented speech. Reduced intelligibility of Korean-accented audiovisual speech was associated with implicit visual biases, suggesting that listener-related factors partially influence the efficiency of audiovisual integration for nonnative speech perception.
When writing impairs reading: letter perception's susceptibility to motor interference.
James, Karin H; Gauthier, Isabel
2009-08-01
The effect of writing on the concurrent visual perception of letters was investigated in a series of studies using an interference paradigm. Participants drew shapes and letters while simultaneously visually identifying letters and shapes embedded in noise. Experiments 1-3 demonstrated that letter perception, but not the perception of shapes, was affected by motor interference. This suggests a strong link between the perception of letters and the neural substrates engaged during writing. The overlap both in category (letter vs. shape) and in the perceptual similarity of the features (straight vs. curvy) of the seen and drawn items determined the amount of interference. Experiment 4 demonstrated that intentional production of letters is not necessary for the interference to occur, because passive movement of the hand in the shape of letters also interfered with letter perception. When passive movements were used, however, only the category of the drawn items (letters vs. shapes), but not the perceptual similarity, had an influence, suggesting that motor representations for letters may selectively influence visual perception of letters through proprioceptive feedback, with an additional influence of perceptual similarity that depends on motor programs.
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
Unconscious Imagination and the Mental Imagery Debate
Brogaard, Berit; Gatzia, Dimitria Electra
2017-01-01
Traditionally, philosophers have appealed to the phenomenological similarity between visual experience and visual imagery to support the hypothesis that there is significant overlap between the perceptual and imaginative domains. The current evidence, however, is inconclusive: while evidence from transcranial brain stimulation seems to support this conclusion, neurophysiological evidence from brain lesion studies (e.g., from patients with brain lesions resulting in a loss of mental imagery but not a corresponding loss of perception, and vice versa) indicates that there are functional and anatomical dissociations between mental imagery and perception. If mental imagery and perception do not overlap, at least to the extent traditionally assumed, the question arises as to what exactly mental imagery is and whether it parallels perception by proceeding via several functionally distinct mechanisms. In this review, we argue that even though there may not be a shared mechanism underlying vision for perception and conscious imagery, there is an overlap between the mechanisms underlying vision for action and unconscious visual imagery. On the basis of these findings, we propose a modification of Kosslyn's model of imagery that accommodates unconscious imagination, and we explore possible explanations of the quasi-pictorial phenomenology of conscious visual imagery in light of the fact that its underlying neural substrates and mechanisms are typically distinct from those of visual experience. PMID:28588527
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel
GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath, and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
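The pipeline described here (per-scanpath shape features, stacked into vectors, then clustered) can be sketched roughly as follows. The particular feature set and the minimal two-cluster k-means below are illustrative assumptions, not GazeAppraise's actual implementation:

```python
import numpy as np

def scanpath_features(path):
    """Descriptive shape features for one scanpath (an N x 2 array of gaze points)."""
    p = np.asarray(path, dtype=float)
    steps = np.diff(p, axis=0)                     # point-to-point displacements
    seg = np.linalg.norm(steps, axis=1)            # segment lengths
    length = seg.sum()                             # total path length
    disp = np.linalg.norm(p[-1] - p[0])            # start-to-end displacement
    bbox = p.max(axis=0) - p.min(axis=0)           # bounding-box extent
    angles = np.arctan2(steps[:, 1], steps[:, 0])  # segment directions
    # Coarse direction histogram as a simple temporal-order-sensitive shape cue
    hist, _ = np.histogram(angles, bins=4, range=(-np.pi, np.pi), density=True)
    return np.concatenate(([length, disp / max(length, 1e-9),
                            bbox[0], bbox[1]], hist))

def kmeans(X, k=2, iters=20):
    """Minimal Lloyd's k-means with deterministic farthest-point initialization."""
    C = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in C], axis=0)
        C.append(X[np.argmax(d)])          # next center: point farthest from current centers
    C = np.array(C)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return labels
```

In use, each scanpath is reduced to one feature vector, the vectors are z-scored per feature, and the cluster labels then group scanpaths by search strategy (e.g., linear sweeps vs. circular scans separate cleanly on the straightness and direction-histogram features).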
Gestalt Theory Rearranged: Back to Wertheimer
Guberman, Shelia
2017-01-01
Wertheimer's seminal paper of 1923 was of great influence in psychology and other sciences. Wertheimer also emphasized the weaknesses of the newborn Gestalt theory: too many basic laws, and the ambiguity of definitions. At the same time, the paper contained potential solutions to these problems, in the form of a number of very important ideas, some of which were presented implicitly: perception through imitation, the communicative nature of linear drawings and writings, transfer from the visual domain to the motor domain, and linguistic interpretation of the Gestalt. In this paper it is shown that, based on these ideas, Gestalt theory can be rearranged so that the main notions are well defined, and a general principle of Gestalt perception that overarches all known laws and unifies different Gestalt phenomena (the imitation principle) can be introduced. The presented model of Gestalt perception is supported by fundamental neurophysiological data: the mirror neuron phenomenon and simulation theory. PMID:29075220
Gori, Simone; Molteni, Massimo; Facoetti, Andrea
2016-01-01
A visual illusion refers to a percept that differs in some respect from the physical stimulus. Illusions are a powerful non-invasive tool for understanding the neurobiology of vision, telling us, indirectly, how the brain processes visual stimuli. Several neurodevelopmental disorders are characterized by visual deficits, yet surprisingly few studies have investigated illusory perception in clinical populations. Our aim is to review the literature supporting a possible role for visual illusions in helping us understand the visual deficits in developmental dyslexia and autism spectrum disorder. Future studies could develop new tools – based on visual illusions – to identify an early risk for neurodevelopmental disorders. PMID:27199702
Treleaven, Julia; Takasaki, Hiroshi
2015-02-01
Subjective visual vertical (SVV) assesses visual dependence for spatial orientation via vertical perception testing. Measured using the computerized rod-and-frame test (CRFT), SVV is thought to be an important measure of cervical proprioception and might be greater in those with whiplash-associated disorder (WAD), but to date research findings are inconsistent. The aim of this study was to investigate which SVV error measurement is most sensitive for detecting group differences between no-neck-pain control, idiopathic neck pain (INP), and WAD subjects. Cross-sectional study. The Neck Disability Index (NDI), Dizziness Handicap Inventory short form (DHIsf), and the average constant error (CE), absolute error (AE), root mean square error (RMSE), and variable error (VE) of the SVV were obtained from 142 subjects (48 asymptomatic, 36 INP, 42 WAD). The INP group had significantly (p < 0.03) greater VE and RMSE than both the control and WAD groups. No differences were seen between the WAD and control groups. The results demonstrated that people with INP (not WAD) had an altered strategy for maintaining the perception of vertical, with increased variability of performance. This may be due to the complexity of the task. Furthermore, SVV performance was not related to reported pain or dizziness handicap. These findings are inconsistent with other measures of cervical proprioception in neck pain, and more research is required before the SVV can be considered an important measure and utilized clinically. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
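The four error summaries analyzed here (CE, AE, RMSE, VE) are the standard accuracy/consistency decomposition of repeated angular settings. As a rough illustration, using the conventional definitions (which may differ in detail from the CRFT software's own computations), for a set of signed deviations from true vertical:

```python
import math

def svv_error_scores(errors):
    """Summary error measures for repeated SVV trials.
    `errors`: signed deviations (in degrees) of each rod setting from true vertical.
    Conventional definitions; illustrative, not the study's exact procedure."""
    n = len(errors)
    ce = sum(errors) / n                                     # constant error: mean signed bias
    ae = sum(abs(e) for e in errors) / n                     # absolute error: mean magnitude
    rmse = math.sqrt(sum(e * e for e in errors) / n)         # overall accuracy
    ve = math.sqrt(sum((e - ce) ** 2 for e in errors) / n)   # variable error: trial-to-trial consistency
    return {"CE": ce, "AE": ae, "RMSE": rmse, "VE": ve}
```

With these definitions, RMSE² = CE² + VE², so RMSE bundles together the bias (CE) and the variability (VE) that the study reports separately; the INP group's elevated VE and RMSE with unremarkable CE is consistent with a variability effect rather than a directional bias.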
Rocha, Karolinne Maia; Vabre, Laurent; Chateau, Nicolas; Krueger, Ronald R
2010-01-01
To evaluate the changes in visual acuity and visual perception generated by correcting higher order aberrations in highly aberrated eyes using a large-stroke adaptive optics visual simulator. A crx1 Adaptive Optics Visual Simulator (Imagine Eyes) was used to correct and modify the wavefront aberrations in 12 keratoconic eyes and 8 symptomatic postoperative refractive surgery (LASIK) eyes. After measuring ocular aberrations, the device was programmed to compensate for the eye's wavefront error from the second order to the fifth order (6-mm pupil). Visual acuity was assessed through the adaptive optics system using computer-generated ETDRS optotypes and the Freiburg Visual Acuity and Contrast Test. Mean higher order aberration root-mean-square (RMS) errors in the keratoconus and symptomatic LASIK eyes were 1.88 ± 0.99 µm and 1.62 ± 0.79 µm (6-mm pupil), respectively. The visual simulator correction of the higher order aberrations present in the keratoconus eyes improved their visual acuity by a mean of 2 lines compared with their best spherocylinder correction (mean decimal visual acuity improved from 0.31 ± 0.18 with spherocylindrical correction to 0.44 ± 0.23 with higher order aberration correction). In the symptomatic LASIK eyes, mean decimal visual acuity improved from 0.54 ± 0.16 with spherocylindrical correction to 0.71 ± 0.13 with higher order aberration correction. The visual perception of ETDRS letters was improved when correcting higher order aberrations. The adaptive optics visual simulator can effectively measure and compensate for higher order aberrations (second to fifth order), which are associated with diminished visual acuity and perception in highly aberrated eyes. The adaptive optics technology may be of clinical benefit when counseling patients with highly aberrated eyes regarding their maximum subjective potential for vision correction. Copyright 2010, SLACK Incorporated.
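The RMS figures quoted in such studies follow from the Zernike description of the wavefront: with an orthonormal (e.g., ANSI-standard) Zernike expansion, the RMS wavefront error over any subset of terms is the root sum of squares of their coefficients. A minimal sketch (the coefficient map and the order range are hypothetical illustrations, not the study's data):

```python
import math

def higher_order_rms(zernike_coeffs, lo=2, hi=5):
    """RMS wavefront error (same units as the coefficients, typically µm)
    over radial orders lo..hi, assuming orthonormal Zernike coefficients
    keyed by (radial order n, azimuthal frequency m)."""
    return math.sqrt(sum(c * c for (n, m), c in zernike_coeffs.items()
                         if lo <= n <= hi))
```

For example, a hypothetical eye with coefficients {(2, 0): 0.3, (3, 1): 0.4} µm has an RMS over those terms of sqrt(0.3² + 0.4²) = 0.5 µm; terms outside the selected order range are simply excluded from the sum.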
ERIC Educational Resources Information Center
Roberson, Debi; Pak, Hyensou; Hanley, J. Richard
2008-01-01
In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…
Wallace, Deanna L.
2017-01-01
The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. 
However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568
Buchan, Julie N; Munhall, Kevin G
2011-01-01
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.
Cao, Yongqiang; Grossberg, Stephen
2012-02-01
A laminar cortical model of stereopsis and 3D surface perception is developed and simulated. The model shows how spiking neurons that interact in hierarchically organized laminar circuits of the visual cortex can generate analog properties of 3D visual percepts. The model describes how monocular and binocular oriented filtering interact with later stages of 3D boundary formation and surface filling-in in the LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes helps to explain how computationally complementary boundary and surface formation properties lead to a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, the disparity filter, which helps to solve the correspondence problem by eliminating false matches, is realized using inhibitory interneurons as part of the perceptual grouping process by horizontal connections in layer 2/3 of cortical area V2. The 3D sLAMINART model simulates 3D surface percepts that are consciously seen in 18 psychophysical experiments. These percepts include contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, Panum's limiting case, the Venetian blind illusion, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. The model hereby illustrates a general method of unlumping rate-based models that use the membrane equations of neurophysiology into models that use spiking neurons, and which may be embodied in VLSI chips that use spiking neurons to minimize heat production. 
Copyright © 2011 Elsevier Ltd. All rights reserved.
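The "unlumping" of rate-based membrane equations into spiking neurons that the model illustrates can be sketched with a textbook leaky integrate-and-fire unit. This is a generic illustration, not the sLAMINART circuitry; the function name and parameters below are assumptions for the sketch.

```python
import numpy as np

def lif_spikes(input_current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrate input with a membrane leak,
    emit a spike and reset whenever the threshold is crossed."""
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        v += dt / tau * (-v + current)    # leaky membrane integration
        if v >= v_thresh:                 # threshold crossing emits a spike
            spikes.append(i * dt)
            v = v_reset
    return spikes

# A constant suprathreshold current yields a regular spike train whose
# rate grows with input strength, recovering the rate-based description.
low = lif_spikes(np.full(1000, 1.5))
high = lif_spikes(np.full(1000, 3.0))
print(len(high) > len(low) > 0)
```

The spike count over a fixed window plays the role of the lumped "rate" variable, which is why such unlumped models can reproduce analog percept properties.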
Predictive and postdictive mechanisms jointly contribute to visual awareness.
Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki
2009-09-01
One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Shirane, Seiko; Inagaki, Masumi; Sata, Yoshimi; Kaga, Makiko
2004-07-01
In order to evaluate visual perception, the P300 event-related potentials (ERPs) for visual oddball tasks were recorded in 11 patients with attention deficit/hyperactivity disorder (AD/HD), 12 with mental retardation (MR) and 14 age-matched healthy controls. With the aim of revealing trial-to-trial variabilities that are neglected when investigating averaged ERPs, single sweep P300s (ss-P300s) were assessed in addition to averaged P300. There were no significant differences in averaged P300 latency and amplitude between controls and AD/HD patients. AD/HD patients showed an increased variability in the amplitude of ss-P300s, while MR patients showed an increased variability in latency. These findings suggest that in AD/HD patients general attention is impaired to a larger extent than selective attention and visual perception.
Modeling Color Difference for Visualization Design.
Szafir, Danielle Albers
2018-01-01
Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that people's ability to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.
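The uniform-field color-difference metrics that the abstract contrasts with mark-dependent perception can be illustrated with the classic CIE76 Delta E*ab: Euclidean distance in CIELAB. This is a sketch of the standard formula only; it does not reproduce the paper's probabilistic, mark-specific models.

```python
def srgb_to_lab(rgb):
    """Convert an sRGB triple (floats in 0-1) to CIELAB under a D65 white point."""
    # Undo the sRGB transfer function
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4 for c in rgb]
    r, g, b = lin
    # Linear RGB -> CIE XYZ (sRGB/D65 matrix)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 reference white
    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def delta_e(rgb1, rgb2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    lab1, lab2 = srgb_to_lab(rgb1), srgb_to_lab(rgb2)
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# White and black span the full lightness axis, so Delta E is about 100
print(round(delta_e((1.0, 1.0, 1.0), (0.0, 0.0, 0.0))))  # -> 100
```

The paper's point is that a fixed Delta E threshold derived from large uniform patches does not transfer directly to thin lines or small points, which is why mark-specific models are needed.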
Mann, David L; Abernethy, Bruce; Farrow, Damian
2010-07-01
Coupled interceptive actions are understood to be the result of neural processing (and visual information) distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four different visual blur conditions (plano, +1.00, +2.00, +3.00). Coupled responses were found to be better than uncoupled ones, with the blurring of vision found to result in different effects for the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence was found to suggest that low levels of blur may enhance the uncoupled verbal perception of movement.
Dukic, T; Hanson, L; Falkmer, T
2006-01-15
The study examined the effects of manual control locations on two groups of randomly selected young and old drivers in relation to visual time off road, steering wheel deviation and safety perception. Measures of visual time off road, steering wheel deviations and safety perception were performed with young and old drivers during real traffic. The results showed an effect of both driver's age and button location on the dependent variables. Older drivers spent longer visual time off road when pushing the buttons and had larger steering wheel deviations. Moreover, the greater the eccentricity between the normal line of sight and the button locations, the longer the visual time off road and the larger the steering wheel deviations. No interaction effect between button location and age was found with regard to visual time off road. Button location had an effect on perceived safety: the further away from the normal line of sight the lower the rating.
The “Visual Shock” of Francis Bacon: an essay in neuroesthetics
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception.
Chen, Yi-Nan; Lin, Chin-Kai; Wei, Ta-Sen; Liu, Chi-Hsin; Wuang, Yee-Pay
2013-12-01
This study compared the effectiveness of three approaches to improving visual perception among preschool children 4-6 years old with developmental delays: multimedia visual perceptual group training, multimedia visual perceptual individual training, and paper visual perceptual group training. A control group received no special training. This study employed a pretest-posttest control-group true experimental design. A total of 64 children 4-6 years old with developmental delays were randomized into four groups: (1) multimedia visual perceptual group training (15 subjects); (2) multimedia visual perceptual individual training (15 subjects); (3) paper visual perceptual group training (19 subjects); and (4) a control group (15 subjects) with no visual perceptual training. Forty-minute training sessions were conducted once a week for 14 weeks. The Test of Visual Perception Skills, third edition, was used to evaluate the effectiveness of the intervention. Paired-samples t-tests showed significant differences between pre- and post-test scores among the three training groups, but no significant difference was found between the pre-test and post-test scores of the control group. ANOVA results showed significant differences in improvement levels among the four study groups. Scheffe post hoc test results showed significant differences between: group 1 and group 2; group 1 and group 3; group 1 and the control group; and group 2 and the control group. No significant differences were reported between group 2 and group 3, or between group 3 and the control group. The results showed that all three therapeutic programs produced significant differences between pretest and posttest scores. The training effect of the multimedia visual perceptual group program and the individual program was greater than the developmental effect. Both the multimedia visual perceptual group training program and the multimedia visual perceptual individual training program produced significant effects on visual perception.
The multimedia visual perceptual group training program was more effective for improving visual perception than the multimedia visual perceptual individual training program. The multimedia visual perceptual group training program was also more effective than the paper visual perceptual group training program.
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
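The stimulus-response correlation technique used to recover a linear motion filter can be sketched in one dimension: for a white-noise stimulus, cross-correlating the response with the stimulus at each positive lag recovers the temporal filter up to scale. This is an illustrative toy with an assumed filter shape, not the authors' 3D space-time estimate from pursuit data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth temporal filter (an illustrative low-pass impulse response)
lags = np.arange(30)
true_filter = (lags / 5.0) * np.exp(-lags / 5.0)

# White-noise "motion" stimulus and its linearly filtered response
stim = rng.standard_normal(50000)
resp = np.convolve(stim, true_filter)[: len(stim)]

# Reverse correlation: for white noise, the stimulus-response
# cross-correlation at each positive lag recovers the filter (up to scale)
est = np.array([np.dot(resp[k:], stim[: len(stim) - k]) for k in lags]) / len(stim)

rel_err = np.linalg.norm(est - true_filter) / np.linalg.norm(true_filter)
print(rel_err < 0.1)  # the recovered filter closely matches the ground truth
```

The same logic extends to space-time: correlating eye acceleration with local motion perturbations at each spatial location and delay yields the 3D filter described in the abstract.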
Marx, Svenja; Gruenhage, Gina; Walper, Daniel; Rutishauser, Ueli; Einhäuser, Wolfgang
2015-01-01
Competition is ubiquitous in perception. For example, items in the visual field compete for processing resources, and attention controls their priority (biased competition). The inevitable ambiguity in the interpretation of sensory signals yields another form of competition: distinct perceptual interpretations compete for access to awareness. Rivalry, where two equally likely percepts compete for dominance, explicates the latter form of competition. Building upon the similarity between attention and rivalry, we propose to model rivalry by a generic competitive circuit that is widely used in the attention literature—a winner-take-all (WTA) network. Specifically, we show that a network of two coupled WTA circuits replicates three common hallmarks of rivalry: the distribution of dominance durations, their dependence on input strength (“Levelt's propositions”), and the effects of stimulus removal (blanking). This model introduces a form of memory by forming discrete states and explains experimental data better than competitive models of rivalry without memory. This result supports the crucial role of memory in rivalry specifically and in competitive processes in general. Our approach unifies the seemingly distinct phenomena of rivalry, memory, and attention in a single model with competition as the common underlying principle. PMID:25581077
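The competitive dynamics behind such models can be sketched with two mutually inhibiting units plus slow adaptation: a minimal winner-take-all style oscillator whose dominance alternates over time. The parameters are illustrative assumptions; this is not the paper's coupled two-WTA network with discrete memory states.

```python
import numpy as np

def rivalry_wta(duration=40.0, dt=0.001):
    """Two units with mutual inhibition and slow adaptation: a minimal
    winner-take-all style rivalry oscillator (illustrative parameters)."""
    u = np.array([0.6, 0.4])        # activities of the two competing percepts
    a = np.zeros(2)                 # slow adaptation variables
    I, w, g = 1.0, 2.0, 2.0         # input, cross-inhibition, adaptation gain
    tau_u, tau_a = 0.01, 0.9        # fast activity / slow adaptation time constants
    dominant = []
    for _ in range(int(duration / dt)):
        drive = I - w * u[::-1] - g * a          # input minus inhibition and adaptation
        u = u + dt / tau_u * (-u + np.clip(drive, 0.0, None))
        a = a + dt / tau_a * (-a + u)
        dominant.append(int(u[1] > u[0]))
    return np.array(dominant)

dom = rivalry_wta()
switches = int(np.count_nonzero(np.diff(dom)))
print(switches > 2)  # dominance alternates repeatedly over 40 s
```

Each unit suppresses the other while its own adaptation builds up; when the winner's adaptation crosses the loser's escape threshold, dominance switches, producing the alternations characteristic of rivalry.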
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Evolution and Optimality of Similar Neural Mechanisms for Perception and Action during Search
Zhang, Sheng; Eckstein, Miguel P.
2010-01-01
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to differing visual processing and world representations for conscious perception than those for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other stream determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways built from linear combinations of primary visual cortex receptive fields (V1) by making the simulated individuals' probability of survival depend on the perceptual accuracy finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependence of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions which resulted in partial or no convergence were a case of an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. 
Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways. PMID:20838589
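The ideal-observer decision rule that such searchers approximate can be sketched for a single fixation: with unit-variance Gaussian responses and equal priors, the log posterior for "target at location i" reduces to d'_i * r_i - d'_i^2 / 2, and accuracy falls with eccentricity-dependent detectability. This is an illustrative toy under those stated assumptions, not the paper's evolved two-pathway model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Detectability (d') falls off with retinal eccentricity; fixation is at location 0
ecc = np.arange(8)
dprime = 3.0 * np.exp(-ecc / 4.0)

def search_decision(target_loc):
    """One trial: each location returns a unit-variance Gaussian response,
    shifted by d' at the target; report the maximum-posterior location."""
    r = rng.standard_normal(len(dprime))
    r[target_loc] += dprime[target_loc]
    # Equal priors: log posterior (up to a constant) is d'_i * r_i - d'_i**2 / 2
    return int(np.argmax(dprime * r - dprime ** 2 / 2))

def accuracy(target_loc, n_trials=2000):
    return np.mean([search_decision(target_loc) == target_loc
                    for _ in range(n_trials)])

print(accuracy(0) > accuracy(7))  # foveal targets are located more reliably
```

Because detectability is not uniform across the retina, where the eyes are pointed strongly shapes perceptual accuracy, which is the pressure that drives the eye-movement and perception pathways toward similar mechanisms in the simulations.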
Functional analysis from visual and compositional data. An artificial intelligence approach.
NASA Astrophysics Data System (ADS)
Barceló, J. A.; Moitinho de Almeida, V.
Why are archaeological artefacts the way they are? In this paper we try to answer this question by investigating the relationship between form and function. We propose new ways of studying how behaviour in the past can be inferred from the examination of archaeological observables in the present. In any case, we take into account that there are also non-visual features characterizing ancient objects and materials (i.e., compositional information based on mass spectrometry data, chronological information based on radioactive decay measurements, etc.). Information that should make us aware of many functional properties of objects is multidimensional in nature: size, which makes reference to height, length, depth, weight and mass; shape and form, which make reference to the geometry of contours and volumes; texture, which refers to the microtopography (roughness, waviness, and lay) and visual appearance (colour variations, brightness, reflectivity and transparency) of surfaces; and finally material, meaning the combining of distinct compositional elements and properties to form a whole. With the exception of material data, the other relevant aspects for functional reasoning have been traditionally described in rather ambiguous terms, without taking into account the advantages of quantitative measurements of shape/form and texture. Reasoning about the functionality of objects recovered at the archaeological site requires a cross-disciplinary investigation, which may range from recognition techniques used in computer vision and robotics to reasoning, representation, and learning methods in artificial intelligence.
The approach we adopt here is to follow current computational theories of object perception to ameliorate the way archaeology can deal with the explanation of human behaviour in the past (function) from the analysis of visual and non-visual data, taking into account that visual appearances and even compositional characteristics only constrain the way an object may be used, but never fully determine it.
Seeing Is the Hardest Thing to See: Using Illusions to Teach Visual Perception
ERIC Educational Resources Information Center
Riener, Cedar
2015-01-01
This chapter describes three examples of using illusions to teach visual perception. The illusions present ways for students to change their perspective regarding how their eyes work and also offer opportunities to question assumptions regarding their approach to knowledge.
Attentional Episodes in Visual Perception
ERIC Educational Resources Information Center
Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark
2011-01-01
Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…
Core, Cynthia; Brown, Janean W; Larsen, Michael D; Mahshie, James
2014-01-01
The objectives of this research were to determine whether an adapted version of a Hybrid Visual Habituation procedure could be used to assess speech perception of phonetic and prosodic features of speech (vowel height, lexical stress, and intonation) in individual pre-school-age children who use cochlear implants. Nine children ranging in age from 3;4 to 5;5 participated in this study. Children were prelingually deaf, used cochlear implants, and had no other known disabilities. Children received two speech feature tests using an adaptation of a Hybrid Visual Habituation procedure. Seven of the nine children demonstrated perception of at least one speech feature using this procedure, based on results from a Bayesian linear regression analysis. At least one child demonstrated perception of each speech feature using this assessment procedure. An adapted version of the Hybrid Visual Habituation procedure with an appropriate statistical analysis provides a way to assess phonetic and prosodic aspects of speech in pre-school-age children who use cochlear implants.
Emotional voice and emotional body postures influence each other independently of visual awareness.
Stienen, Bernard M C; Tanaka, Akihiro; de Gelder, Beatrice
2011-01-01
Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside and independent of visual awareness.
Perception of touch quality in piano tones.
Goebl, Werner; Bresin, Roberto; Fujinaga, Ichiro
2014-11-01
Both timbre and dynamics of isolated piano tones are determined exclusively by the speed with which the hammer hits the strings. This physical view has been challenged by pianists who emphasize the importance of the way the keyboard is touched. This article presents empirical evidence from two perception experiments showing that touch-dependent sound components make sounds with identical hammer velocities but produced with different touch forms clearly distinguishable. The first experiment focused on finger-key sounds: musicians could identify pressed and struck touches. When the finger-key sounds were removed from the sounds, the effect vanished, suggesting that these sounds were the primary identification cue. The second experiment looked at key-keyframe sounds that occur when the key reaches key-bottom. Key-bottom impact was identified from key motion measured by a computer-controlled piano. Musicians were able to discriminate between piano tones that contain a key-bottom sound from those that do not. However, this effect might be attributable to sounds associated with the mechanical components of the piano action. In addition to the demonstrated acoustical effects of different touch forms, visual and tactile modalities may play important roles during piano performance that influence the production and perception of musical expression on the piano.
The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.
Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P
2015-01-01
Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
Al-Marri, Faraj; Reza, Faruque; Begum, Tahamina; Hitam, Wan Hazabbah Wan; Jin, Goh Khean; Xiang, Jing
2017-10-25
Visual cognitive function is important to build up executive function in daily life. Perception of visual Number form (e.g., Arabic digit) and numerosity (magnitude of the Number) is of interest to cognitive neuroscientists. Neural correlates and the functional measurement of Number representations are complex occurrences when their semantic categories are assimilated with other concepts of shape and colour. Colour perception can be processed further to modulate visual cognition. The Ishihara pseudoisochromatic plates are one of the best and most common screening tools for basic red-green colour vision testing. However, there is a lack of studies assessing visual cognitive function using these pseudoisochromatic plates. We recruited 25 healthy normal trichromat volunteers and extended these studies using a 128-sensor net to record event-related EEG. Subjects were asked to respond by pressing Numbered buttons when they saw the Number and Non-number plates of the Ishihara colour vision test. Amplitudes and latencies of the N100 and P300 event-related potential (ERP) components were analysed from 19 electrode sites in the international 10-20 system. A brain topographic map, cortical activation patterns and Granger causality (effective connectivity) were analysed from 128 electrode sites. The absence of major differences in the N100 ERP components between the two stimulus types indicates that early selective attention processing was similar for Number and Non-number plates. However, Non-number plate stimuli evoked significantly higher amplitudes and longer latencies of the P300 ERP component, with slower reaction times, than Number plate stimuli, implying a greater allocation of attentional load during Non-number plate processing. A different pattern of asymmetric scalp voltage map was noticed for the P300 component, with higher intensity in the left hemisphere for Number plate tasks and higher intensity in the right hemisphere for Non-number plate tasks.
Asymmetric cortical activation and connectivity patterns revealed that Number recognition occurred in the occipital and left frontal areas, whereas activation was limited to the occipital area during Non-number plate processing. Finally, the results showed that the visual recognition of Numbers dissociates from the recognition of Non-numbers at the level of defined neural networks. Number recognition was not only a process of visual perception and attention, but was also related to a higher level of cognitive function, that of language.
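The effective-connectivity analysis above rests on Granger causality: a signal x "Granger-causes" y if x's past improves prediction of y beyond what y's own past provides. The study's 128-channel pipeline is not described in code, but the underlying bivariate F-test can be sketched with numpy alone (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def granger_f(target, source, lags=2):
    """F-statistic for 'source Granger-causes target': compare a restricted
    AR model (target's own past) with a full model that also includes the
    source's past, both fit by ordinary least squares."""
    n = len(target)
    y = target[lags:]
    own = np.column_stack([target[lags - k:n - k] for k in range(1, lags + 1)])
    src = np.column_stack([source[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((n - lags, 1))

    def rss(X):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        return np.sum((y - X @ beta) ** 2)

    rss_r = rss(np.hstack([ones, own]))        # restricted model
    rss_f = rss(np.hstack([ones, own, src]))   # full model
    dof = (n - lags) - (2 * lags + 1)
    return ((rss_r - rss_f) / lags) / (rss_f / dof)

# Demo: x drives y with a one-sample delay, so F(x -> y) should dominate.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
e = 0.3 * rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + e[t]
print(granger_f(y, x) > granger_f(x, y))  # prints True
```

EEG connectivity work typically uses multivariate or spectral variants of this test across many sensor pairs; the two-signal time-domain form shown here is only the conceptual core.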
Global motion perception deficits in autism are reflected as early as primary visual cortex
Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.
2014-01-01
Individuals with autism are often characterized as ‘seeing the trees, but not the forest’—attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). 
These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. PMID:25060095
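The coherence manipulation described above is typically implemented as a random-dot kinematogram: on each frame, a fixed proportion of dots (the coherence) steps in the signal direction while the remainder step in random directions. A minimal numpy sketch of one frame update (illustrative only, with hypothetical names; not the stimulus code used in the study):

```python
import numpy as np

def rdk_step(positions, coherence, direction_deg, step=1.0, rng=None):
    """Advance a random-dot kinematogram one frame: a `coherence` fraction
    of dots moves `step` units in the signal direction; the rest move in
    uniformly random directions."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(positions)
    signal = rng.random(n) < coherence              # which dots carry signal
    angles = np.where(signal,
                      np.radians(direction_deg),    # signal direction
                      rng.uniform(0, 2 * np.pi, n)) # random noise directions
    return positions + step * np.column_stack([np.cos(angles), np.sin(angles)])

rng = np.random.default_rng(1)
dots = rng.uniform(-1, 1, (100, 2))
moved = rdk_step(dots, coherence=0.75, direction_deg=0, rng=rng)
# At 75% coherence the mean horizontal displacement is clearly positive.
print((moved - dots)[:, 0].mean() > 0.5)  # prints True
```

Real displays add dot lifetimes, wrapping at the aperture edge, and re-sampling of the signal subset each frame; those details are omitted here.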
Mental Rotation Meets the Motion Aftereffect: The Role of hV5/MT+ in Visual Mental Imagery
ERIC Educational Resources Information Center
Seurinck, Ruth; de Lange, Floris P.; Achten, Erik; Vingerhoets, Guy
2011-01-01
A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of an MAE while subjects…
Odors Bias Time Perception in Visual and Auditory Modalities
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
2016-01-01
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. 
Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143
The Effect of Temporal Perception on Weight Perception
Kambara, Hiroyuki; Shin, Duk; Kawase, Toshihiro; Yoshimura, Natsue; Akahane, Katsuhito; Sato, Makoto; Koike, Yasuharu
2013-01-01
A successful catch of a falling ball requires an accurate estimation of the timing for when the ball hits the hand. In a previous experiment in which participants performed a ball-catching task in a virtual reality environment, we accidentally found that the weight of a falling ball was perceived differently when the timing of ball load force to the hand was shifted from the timing expected from visual information. Although it is well known that spatial information about an object, such as size, can easily deceive our perception of its heaviness, the relationship between temporal information and perceived heaviness is still not clear. In this study, we investigated the effect of temporal factors on weight perception. We conducted ball-catching experiments in a virtual environment where the timing of load force exertion was shifted away from the visual contact timing (i.e., the time when the ball hit the hand in the display). We found that the ball was perceived as heavier when force was applied before visual contact and as lighter when force was applied after visual contact. We also conducted additional experiments in which participants were conditioned to one of two constant time offsets prior to testing weight perception. After performing ball-catching trials with 60 ms advanced or delayed load force exertion, participants’ subjective judgment on the simultaneity of visual contact and force exertion changed, reflecting a shift in perception of time offset. In addition, the timing of catching motion initiation relative to visual contact changed, reflecting a shift in estimation of force timing. We also found that participants began to perceive the ball as lighter after conditioning to the 60 ms advanced offset and heavier after the 60 ms delayed offset. These results suggest that perceived heaviness depends not on the actual time offset between force exertion and visual contact but on the subjectively perceived time offset between them and/or estimation error in force timing. 
PMID:23450805
NASA Astrophysics Data System (ADS)
Hyde, Jerald R.
2004-05-01
It is clear to those who "listen" to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event engages all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of "acoustical intimacy" and in the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we "hear" and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part by the Veneklasen Research Foundation and Veneklasen Associates.]
Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.
Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas
2016-01-01
While perceptual learning increases objective sensitivity, the effects on the constant interaction of the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity, and confidence in it, increased with training. Metacognitive sensitivity (estimated from certainty ratings by a bias-free signal detection theoretic approach), in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself, determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
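The distinction drawn here between objective (type-1) sensitivity and metacognitive (type-2) sensitivity can be made concrete with standard signal detection theory. The sketch below uses textbook formulas, not the authors' analysis code (their bias-free measure may differ, e.g. meta-d'): it computes d' from hit and false-alarm rates, and a simple type-2 AUROC that asks how well confidence ratings discriminate correct from incorrect trials.

```python
from statistics import NormalDist  # stdlib inverse normal CDF (Python 3.8+)

def d_prime(hit_rate, fa_rate):
    """Type-1 sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def type2_auroc(conf_correct, conf_incorrect):
    """Type-2 sensitivity summary: probability that a randomly chosen
    correct trial carries higher confidence than a randomly chosen
    incorrect one (ties count half) -- the area under the type-2 ROC."""
    wins = sum((c > i) + 0.5 * (c == i)
               for c in conf_correct for i in conf_incorrect)
    return wins / (len(conf_correct) * len(conf_incorrect))

print(round(d_prime(0.84, 0.16), 2))             # prints 1.99
print(type2_auroc([3, 4, 4, 2], [1, 2, 2, 3]))   # prints 0.84375
```

The paper's point maps onto this split directly: training can raise d' (and overall confidence) while leaving the type-2 measure unchanged.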
Terhune, Devin B; Murray, Elizabeth; Near, Jamie; Stagg, Charlotte J; Cowey, Alan; Cohen Kadosh, Roi
2015-11-01
Phosphenes are illusory visual percepts produced by the application of transcranial magnetic stimulation to occipital cortex. Phosphene thresholds, the minimum stimulation intensity required to reliably produce phosphenes, are widely used as an index of cortical excitability. However, the neural basis of phosphene thresholds and their relationship to individual differences in visual cognition are poorly understood. Here, we investigated the neurochemical basis of phosphene perception by measuring basal GABA and glutamate levels in primary visual cortex using magnetic resonance spectroscopy. We further examined whether phosphene thresholds would relate to the visuospatial phenomenology of grapheme-color synesthesia, a condition characterized by atypical binding and involuntary color photisms. Phosphene thresholds negatively correlated with glutamate concentrations in visual cortex, with lower thresholds associated with elevated glutamate. This relationship was robust, present in both controls and synesthetes, and exhibited neurochemical, topographic, and threshold specificity. Projector synesthetes, who experience color photisms as spatially colocalized with inducing graphemes, displayed lower phosphene thresholds than associator synesthetes, who experience photisms as internal images, with both exhibiting lower thresholds than controls. These results suggest that phosphene perception is driven by interindividual variation in glutamatergic activity in primary visual cortex and relates to cortical processes underlying individual differences in visuospatial awareness. © The Author 2015. Published by Oxford University Press.
Plastic reorganization of neural systems for perception of others in the congenitally blind.
Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I
2017-09-01
Recent evidence suggests that the function of the core system for face perception might extend beyond visual face-perception to a broader role in person perception. To critically test the broader role of core face-system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.
2016-01-01
During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908
Cognitive processing in the primary visual cortex: from perception to memory.
Supèr, Hans
2002-01-01
The primary visual cortex is the first cortical area of the visual system that receives information from the external visual world. Based on the receptive field characteristics of the neurons in this area, it has been assumed that the primary visual cortex is a pure sensory area extracting basic elements of the visual scene. This information is then processed further upstream in the higher-order visual areas and provides us with perception and storage of the visual environment. However, recent findings show that neural correlates of perception and memory are also observed in the primary visual cortex. These neural correlates are expressed in the modulated activity of the late response of a neuron to a stimulus, and most likely depend on recurrent interactions between several areas of the visual system. This favors the concept of a distributed nature of visual processing in perceptual organization.
Reframing the action and perception dissociation in DF: haptics matters, but how?
Whitwell, Robert L; Buckingham, Gavin
2013-02-01
Goodale and Milner's (1992) "vision-for-action" and "vision-for-perception" account of the division of labor between the dorsal and ventral "streams" has come to dominate contemporary views of the functional roles of these two pathways. Nevertheless, some lines of evidence for the model remain controversial. Recently, Thomas Schenk reexamined visual form agnosic patient DF's spared anticipatory grip scaling to object size, one of the principal empirical pillars of the model. Based on this new evidence, Schenk rejects the original interpretation of DF's spared ability that was based on segregated processing of object size and argues that DF's spared grip scaling relies on haptic feedback to calibrate visual egocentric cues that relate the posture of the hand to the visible edges of the goal-object. However, a careful consideration of the tasks that Schenk employed reveals some problems with his claim. We suspect that the core issues of this controversy will require a closer examination of the role that cognition plays in the operation of the dorsal and ventral streams in healthy controls and in patient DF.
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
Gilaie-Dotan, Sharon; Doron, Ravid
2017-06-01
Visual categories are associated with eccentricity biases in high-order visual cortex: Faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common objects perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with mid-peripheral rather than with a foveal bias. Here, we studied BN, a 9 y.o. boy who has normal basic-level vision, abnormal (limited) oculomotor pursuit and saccades, and shows developmental object and contour integration deficits but with no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces and perhaps reading, when fixated upon, take up a small portion of central visual field and require only small eye movements to be properly processed, common objects typically prevail in mid-peripheral visual field and rely on longer-distance voluntary eye movements as saccades to be brought to fixation. While retinal information feeds into early visual cortex in an eccentricity orderly manner, we hypothesize that propagation of non-foveal information to mid and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Li, G; Welander, U; Yoshiura, K; Shi, X-Q; McDavid, W D
2003-11-01
Two digital image processing methods, correction for X-ray attenuation and correction for attenuation and visual response, have been developed. The aim of the present study was to compare digital radiographs before and after correction for attenuation and correction for attenuation and visual response by means of a perceptibility curve test. Radiographs were exposed of an aluminium test object containing holes ranging from 0.03 mm to 0.30 mm with increments of 0.03 mm. Fourteen radiographs were exposed with the Dixi system (Planmeca Oy, Helsinki, Finland) and twelve radiographs were exposed with the F1 iOX system (Fimet Oy, Monninkylä, Finland) from low to high exposures covering the full exposure ranges of the systems. Radiographs obtained from the Dixi and F1 iOX systems were 12 bit and 8 bit images, respectively. Original radiographs were then processed for correction for attenuation and correction for attenuation and visual response. Thus, two series of radiographs were created. Ten viewers evaluated all the radiographs in the same random order under the same viewing conditions. The object detail having the lowest perceptible contrast was recorded for each observer. Perceptibility curves were plotted according to the mean of observer data. The perceptibility curves for processed radiographs obtained with the F1 iOX system are higher than those for originals in the exposure range up to the peak, where the curves are basically the same. For radiographs exposed with the Dixi system, perceptibility curves for processed radiographs are higher than those for originals for all exposures. Perceptibility curves show that for 8 bit radiographs obtained from the F1 iOX system, the contrast threshold was increased in processed radiographs up to the peak, while for 12 bit radiographs obtained with the Dixi system, the contrast threshold was increased in processed radiographs for all exposures. 
When comparisons were made between radiographs corrected for attenuation and corrected for attenuation and visual response, basically no differences were found. Radiographs processed for correction for attenuation and correction for attenuation and visual response may improve perception, especially for 12 bit originals.
Cicmil, Nela; Krug, Kristine
2015-01-01
Vision research has the potential to reveal fundamental mechanisms underlying sensory experience. Causal experimental approaches, such as electrical microstimulation, provide a unique opportunity to test the direct contributions of visual cortical neurons to perception and behaviour. But in spite of their importance, causal methods constitute a minority of the experiments used to investigate the visual cortex to date. We reconsider the function and organization of visual cortex according to results obtained from stimulation techniques, with a special emphasis on electrical stimulation of small groups of cells in awake subjects who can report their visual experience. We compare findings from humans and monkeys, striate and extrastriate cortex, and superficial versus deep cortical layers, and identify a number of revealing gaps in the ‘causal map’ of visual cortex. Integrating results from different methods and species, we provide a critical overview of the ways in which causal approaches have been used to further our understanding of circuitry, plasticity and information integration in visual cortex. Electrical stimulation not only elucidates the contributions of different visual areas to perception, but also contributes to our understanding of neuronal mechanisms underlying memory, attention and decision-making. PMID:26240421
Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko
2002-11-01
In order to objectively evaluate visual perception in patients with mental retardation (MR), P300 event-related potentials (ERPs) for visual oddball tasks were recorded in 26 patients and 13 age-matched healthy volunteers. The latency and amplitude of the visual P300 in response to Japanese ideogram stimuli (a pair of familiar Kanji characters or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. Visual P300 was observed in almost all MR patients; however, the peak latency was significantly prolonged compared to that of control subjects. There was no significant difference in P300 latency among the three tasks. The distribution pattern of P300 in MR patients differed from that in the controls, and the amplitudes in the frontal region were larger in MR patients. The latency decreased with age in both groups. The developmental change of P300 latency corresponded to developmental age rather than chronological age. These findings suggest that MR patients have impairments in the processing of visual perception. Assessment of P300 latencies to visual stimuli may be useful as an objective indicator of mental deficit.
CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset
Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity ratings for the perceived emotion were collected using crowd-sourcing from 2,443 raters. The human recognition of intended emotion for the audio-only, visual-only, and audio-visual data are 40.9%, 58.2% and 63.6% respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738
García-Domene, M C; Luque, M J; Díez-Ajenjo, M A; Desco-Esteban, M C; Artigas, J M
2018-02-01
To analyse the relationship between choroidal thickness and the visual perception of patients with high myopia but without retinal damage. All patients underwent ophthalmic evaluation including a slit lamp examination and dilated ophthalmoscopy, subjective refraction, best corrected visual acuity, axial length, optical coherence tomography, contrast sensitivity function and sensitivity of the visual pathways. We included eleven eyes of subjects with high myopia. There were statistically significant correlations between choroidal thickness and almost all the contrast sensitivity values. The sensitivity of the magnocellular and koniocellular pathways is the most affected, and the homogeneity of the sensitivity of the magnocellular pathway depends on the choroidal thickness; when the thickness decreases, the sensitivity impairment extends from the center to the periphery of the visual field. Patients with high myopia without any fundus changes have visual impairments. We have found that choroidal thickness correlates with perceptual parameters such as contrast sensitivity or the mean defect and pattern standard deviation of the visual fields of some visual pathways. Our study shows that the magnocellular and koniocellular pathways are the most affected, so that these patients have impairment in motion perception and blue-yellow contrast perception. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Changing motor perception by sensorimotor conflicts and body ownership
Salomon, R.; Fernandez, N. B.; van Elk, M.; Vachicouras, N.; Sabatier, F.; Tychinskaya, A.; Llobera, J.; Blanke, O.
2016-01-01
Experimentally induced sensorimotor conflicts can result in a loss of the feeling of control over a movement (sense of agency). These findings are typically interpreted in terms of a forward model in which the predicted sensory consequences of the movement are compared with the observed sensory consequences. In the present study we investigated whether a mismatch between movements and their observed sensory consequences not only results in a reduced feeling of agency but may also affect motor perception. Visual feedback of participants' finger movements was manipulated using virtual reality to be anatomically congruent or incongruent with the performed movement. Participants made a motor perception judgment (i.e., which finger did you move?) or a visual perceptual judgment (i.e., which finger did you see moving?). Subjective measures of agency and body ownership were also collected. Seeing movements that were visually incongruent with the performed movement resulted in lower accuracy for motor perception judgments, but not for visual perceptual judgments. This effect was modified by rotating the virtual hand (Exp. 2), but not by passively induced movements (Exp. 3). Hence, sensorimotor conflicts can modulate the perception of one's motor actions, causing viewed "alien actions" to be felt as one's own. PMID:27225834
Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia
ERIC Educational Resources Information Center
Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue
2011-01-01
Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…
Gilaie-Dotan, Sharon
2016-03-01
A key question in visual neuroscience concerns the causal link between specific brain areas and perceptual functions: which regions are necessary for which visual functions? While the contribution of primary visual cortex and high-level visual regions to visual perception has been extensively investigated, the contribution of intermediate visual areas (e.g. V2/V3) to visual processes remains unclear. Here I review more than 20 visual functions (early, mid, and high-level) of LG, a developmental visual agnosic and prosopagnosic young adult whose intermediate visual regions function in a significantly abnormal fashion, as revealed through extensive fMRI and ERP investigations. While some of LG's visual functions are, as expected, significantly impaired, others are surprisingly normal (e.g. stereopsis, color, reading, biological motion). During the eight-year testing period described here, LG trained on a perceptual learning paradigm that succeeded in improving some but not all of his visual functions. Following LG's visual performance and taking into account additional findings in the field, I propose a framework for how different visual areas contribute to different visual functions, with an emphasis on intermediate visual regions. Thus, although rewiring and plasticity in the brain can occur during development to overcome and compensate for hindering developmental factors, LG's case seems to indicate that some visual functions are much less dependent on strict hierarchical flow than others, and can develop normally in spite of abnormal mid-level visual areas, probably because they depend less on intermediate visual regions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.
Kok, Peter; de Lange, Floris P
2014-07-07
An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multimodal emotion perception after anterior temporal lobectomy (ATL)
Milesi, Valérie; Cekic, Sezen; Péron, Julie; Frühholz, Sascha; Cristinzio, Chiara; Seeck, Margitta; Grandjean, Didier
2014-01-01
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion. PMID:24839437
Maljaars, J P W; Noens, I L J; Scholte, E M; Verpoorten, R A W; van Berckelaer-Onnes, I A
2011-01-01
The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the weak central coherence account. The main focus of the present study is to examine whether enhanced visual perception is also present in low-functioning deaf individuals with and without ASD compared with individuals with ID, and to evaluate the underlying cognitive style in deaf and hearing individuals with ASD. Different sorting tasks (selected from the ComFor) were administered to four subsamples: (1) individuals with ID (n = 68); (2) individuals with ID and ASD (n = 72); (3) individuals with ID and deafness (n = 22); and (4) individuals with ID, ASD and deafness (n = 15). Differences in performance on sorting tasks with meaningful and meaningless materials between the four subgroups were analysed, with age and level of functioning taken into account. Analyses of covariance revealed that the results of deaf individuals with ID and ASD are in line with those of hearing individuals with ID and ASD. Both groups showed enhanced visual perception, especially on meaningless sorting tasks, when compared with hearing individuals with ID, but not when compared with deaf individuals with ID. In ASD, with or without deafness, enhanced visual perception for meaningless information can be understood within the framework of central coherence theory, whereas in deafness, the enhancement might be due to a more general enhancement of visual perception resulting from auditory deprivation. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.
Visual cues and perceived reachability.
Gabbard, Carl; Ammar, Diala
2005-12-01
A rather consistent finding in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate at midline. Explanations of such behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom, also known as the whole-body explanation. The present study examined the role of visual information in the form of binocular and monocular cues in perceived reachability. Right-handed participants judged the reachability of visual targets at midline with both eyes open, dominant eye occluded, and the non-dominant eye covered. Results indicated that participants were relatively accurate with condition responses not being significantly different in regard to total error. Analysis of the direction of error (mean bias) revealed effective accuracy across conditions with only a marginal distinction between monocular and binocular conditions. Therefore, within the task conditions of this experiment, it appears that binocular and monocular cues provide sufficient visual information for effective judgments of perceived reach at midline.
Samaha, Jason; Postle, Bradley R
2017-11-29
Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, acceleration stimuli effects on the semicircular canals, and neurons affecting eye movements, and vestibular tests.
Grossberg, Stephen
2014-01-01
Neural models of perception clarify how visual illusions arise from adaptive neural processes. Illusions also provide important insights into how adaptive neural processes work. This article focuses on two illusions that illustrate a fundamental property of global brain organization; namely, that advanced brains are organized into parallel cortical processing streams with computationally complementary properties. That is, in order to process certain combinations of properties, each cortical stream must forgo processing the complementary properties. Interactions between these streams, across multiple processing stages, overcome their complementary deficiencies to compute effective representations of the world, and thereby achieve the property of complementary consistency. The two illusions concern how illusory depth can vary with brightness, and how apparent motion of illusory contours can occur. Illusory depth from brightness arises from the complementary properties of boundary and surface processes, notably boundary completion and surface filling-in, within the parvocellular form-processing cortical stream. This illusion depends upon how surface contour signals from the V2 thin stripes to the V2 interstripes ensure complementary consistency of a unified boundary/surface percept. Apparent motion of illusory contours arises from the complementary properties of form and motion processes across the parvocellular and magnocellular cortical processing streams. This illusion depends upon how illusory contours help to complete boundary representations for object recognition, how apparent motion signals can help to form continuous trajectories for target tracking and prediction, and how formotion interactions from V2 to MT enable completed object representations to be tracked continuously even when they move behind intermittently occluding objects through time. PMID:25389399
Serial dependence in the perception of attractiveness.
Xia, Ye; Leib, Allison Yamanashi; Whitney, David
2016-12-01
The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
Liu, Jianli; Lughofer, Edwin; Zeng, Xianyi
2015-01-01
Modeling human aesthetic perception of visual textures is important and valuable in numerous industrial domains, such as product design, architectural design, and decoration. Based on results from a semantic differential rating experiment, we modeled the relationship between low-level basic texture features and the aesthetic properties involved in human aesthetic texture perception. First, we compute basic texture features from textural images using four classical methods. These features are neutral, objective, and independent of the socio-cultural context of the visual textures. Then, we conduct a semantic differential rating experiment to collect evaluators' aesthetic perceptions of selected textural stimuli. In the semantic differential rating experiment, eight pairs of aesthetic properties are chosen that are strongly related to the socio-cultural context of the selected textures and to human emotions; they are easily understood and connected to everyday life. We propose a hierarchical feed-forward layer model of aesthetic texture perception and assign the eight pairs of aesthetic properties to different layers. Finally, we describe the generation of multiple linear and non-linear regression models for aesthetic prediction, taking dimensionality-reduced texture features and aesthetic properties of visual textures as independent and dependent variables, respectively. Our experimental results indicate that the relationships between each layer and its neighbors in the hierarchical feed-forward layer model of aesthetic texture perception can be fitted well by linear functions, and the models thus generated can successfully bridge the gap between computational texture features and aesthetic texture properties.
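The final modeling step described above, regressing aesthetic properties on dimensionality-reduced texture features, can be illustrated with a minimal least-squares fit. This is a hedged sketch with a single fabricated feature; the paper's actual models are multiple linear and non-linear regressions over real texture features.

```python
# Minimal sketch of the regression step: ordinary least squares for a single
# (hypothetical) texture feature predicting a rated aesthetic property.
# All numbers are fabricated for illustration.
def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# A contrast-like feature value per texture vs. a rated "pleasantness" score.
features = [0.1, 0.2, 0.3, 0.4]
ratings = [1.0, 2.0, 3.0, 4.0]
a, b = fit_linear(features, ratings)
print(a, b)  # slope near 10, intercept near 0
```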
Tallon-Baudry, Catherine; Campana, Florence; Park, Hyeong-Dong; Babo-Rebelo, Mariana
2018-05-01
Why should a scientist whose aim is to unravel the neural mechanisms of perception consider brain-body interactions seriously? Brain-body interactions have traditionally been associated with emotion, effort, or stress, but not with the "cold" processes of perception and attention. Here, we review recent experimental evidence suggesting a different picture: the neural monitoring of bodily state, and in particular the neural monitoring of the heart, affects visual perception. The impact of spontaneous fluctuations of neural responses to heartbeats on visual detection is as large as the impact of explicit manipulations of spatial attention in perceptual tasks. However, we propose that the neural monitoring of visceral inputs plays a specific role in conscious perception, distinct from the role of attention. The neural monitoring of organs such as the heart or the gut would generate a subject-centered reference frame, from which the first-person perspective inherent to conscious perception can develop. In this view, conscious perception results from the integration of visual content with first-person perspective. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Julesz, Bela
1989-08-01
A quarter of a century ago I introduced two paradigms into psychology which in the intervening years have had a direct impact on the psychobiology of early vision and an indirect one on artificial intelligence (AI, or machine vision). The first, the computer-generated random-dot stereogram (RDS) paradigm (Julesz, 1960), at its very inception posed a strategic question for both AI and neurophysiology. The finding that stereoscopic depth perception (stereopsis) is possible without the many enigmatic cues of monocular form recognition, as had previously been assumed, demonstrated that stereopsis, with its basic problem of finding matches between corresponding random aggregates of dots in the left and right visual fields, was ripe for modeling. Indeed, the binocular matching problem of stereopsis opened up an entire field of study, eventually leading to the computational models of David Marr (1982) and his coworkers. The fusion of RDS had an even greater impact on neurophysiologists, including Hubel and Wiesel (1962), who realized that stereopsis must occur at an early stage and can be studied more easily than form perception. This insight recently culminated in the studies by Gian Poggio (1984), who found binocular-disparity-tuned neurons in the input stage to the visual cortex (layer IVB in V1) in the monkey that were selectively triggered by dynamic RDS. Thus the first paradigm led to a strategic insight: with stereoscopic vision there is no camouflage, so it was advantageous for our primate ancestors to evolve the cortical machinery of stereoscopic vision to capture camouflaged prey (insects) at a standstill. Amazingly, although stereopsis evolved relatively late in primates, it captured the very input stages of the visual cortex. (For a detailed review, see Julesz, 1986a.)
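The RDS construction the abstract refers to can be sketched directly: generate one random dot field, copy it, and shift a central patch horizontally in the second image so that the patch carries binocular disparity while each monocular image remains pure noise. The size, shift amount, and gap-filling rule below are illustrative choices, not Julesz's exact procedure.

```python
# A hedged sketch of random-dot stereogram (RDS) construction in the spirit
# of Julesz (1960): duplicate a random dot field and shift a central patch
# horizontally in one eye's image to create disparity.
import random

def make_rds(size=10, shift=1, seed=0):
    rng = random.Random(seed)
    left = [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]
    right = [row[:] for row in left]
    lo, hi = size // 4, 3 * size // 4
    for r in range(lo, hi):
        patch = left[r][lo:hi]
        right[r][lo - shift:hi - shift] = patch  # shifted patch carries disparity
        for c in range(hi - shift, hi):
            right[r][c] = rng.randint(0, 1)      # refill the uncovered strip
    return left, right

left, right = make_rds()
print(left[0] == right[0])  # True: rows outside the central patch are identical
```

Each monocular image looks like uniform noise; only binocular matching of the two fields reveals the displaced square floating in depth, which is the point of the paradigm.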
Perception of 3-D location based on vision, touch, and extended touch
Giudice, Nicholas A.; Klatzky, Roberta L.; Bennett, Christopher R.; Loomis, Jack M.
2012-01-01
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate. PMID:23070234
Global motion perception is associated with motor function in 2-year-old children.
Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E
2017-09-29
The dorsal visual processing stream, which includes V1, motion-sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school-aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly, but statistically significantly, associated with Bayley III composite motor (r² = 0.06, p < 0.001, n = 375) and gross motor scores (r² = 0.06, p < 0.001, n = 375). The associations remained significant when language score was included in the regression model. In addition, when language score was included in the model, stereopsis was significantly associated with composite motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
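A motion coherence threshold, as used above, is the lowest coherence level at which the observer's response (here, observed optokinetic nystagmus) is reliable. The sketch below shows one simple criterion-based estimate; the study's actual scoring procedure may differ, and the trial data are fabricated.

```python
# Illustrative sketch (not the study's actual scoring): estimate a motion
# coherence threshold as the lowest coherence at which a criterion proportion
# of trials elicited a detectable response such as optokinetic nystagmus.
def coherence_threshold(trials, criterion=0.75):
    """trials: {coherence: [True/False per trial]}; returns threshold or None."""
    for coherence in sorted(trials):
        hits = trials[coherence]
        if sum(hits) / len(hits) >= criterion:
            return coherence
    return None

trials = {
    0.1: [False, False, True, False],
    0.3: [True, False, True, True],
    0.5: [True, True, True, True],
}
print(coherence_threshold(trials))  # 0.3
```

A lower threshold indicates better global motion perception, which is why thresholds (rather than raw accuracy) serve as the dorsal-stream measure in the regression analyses above.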
Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review
Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi
2015-01-01
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning have reported that, after an association is established between a sound sequence without spatial information and visual motion information, the sound sequence can trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns can be observed in several brain areas, including the motion processing areas, across spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information may mutually interact in spatiotemporal processing in perception of the external world, and that common perceptual and neural mechanisms may underlie such spatiotemporal processing. PMID:26733827
Alm, Magnus; Behne, Dawn
2015-01-01
Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. In contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood, recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy toward more visually dominated responses. PMID:26236274
Visual Detection, Identification, and Localization: An Annotated Bibliography.
ERIC Educational Resources Information Center
Lyman, Bernard
This annotated bibliography containing research on visual perception executed at photopic levels in artificial laboratory situations has been compiled to make information available that can be applied to scotopic perception of natural objects in natural situations. There are 407 reports or studies, published from 1945 through 1964, cited in this…
Visual Attention and Perception in Three-Dimensional Space
1992-01-01
Hughes & Zimba, 1987). The 'arrows' in the near-far condition (Fig. 1c) were actually wedges that pointed either toward or away from the subject, with their… (1992). The increase in saccade latencies in the lower visual field. Perception and Psychophysics (in press). Hughes, H. C., & Zimba, L. D. (1987
DOT National Transportation Integrated Search
2004-03-20
A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...
Perceived Competence of Children with Visual Impairments
ERIC Educational Resources Information Center
Shapiro, Deborah R.; Moffett, Aaron; Lieberman, Lauren; Dummer, Gail M.
2005-01-01
This study examined the perceptions of competence of 43 children with visual impairments who were attending a summer sports camp. It found there were meaningful differences in the perceived competence of the girls, but not the boys, after they attended the camp, and no differences in the perceptions of competence with age.
Binaural Perception in Young Infants.
ERIC Educational Resources Information Center
Bundy, Robert S.
This paper describes three experiments which demonstrated the presence of binaural perception abilities (the ability to use both ears) in 4-month-old but not in 2-month-old infants. All of the experiments employed a visual fixation habituation-dishabituation paradigm in which infants were given a series of visual fixation trials while binaural…
Movement Perception and Movement Production in Asperger's Syndrome
ERIC Educational Resources Information Center
Price, Kelly J.; Shiffrar, Maggie; Kerns, Kimberly A.
2012-01-01
To determine whether motor difficulties documented in Asperger's Syndrome (AS) are related to compromised visual abilities, this study examined perception and movement in response to dynamic visual environments. Fourteen males with AS and 16 controls aged 7-23 completed measures of motor skills, postural response to optic flow, and visual…
Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception
ERIC Educational Resources Information Center
Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…
Dynamic Visual Perception and Reading Development in Chinese School Children
ERIC Educational Resources Information Center
Meng, Xiangzhi; Cheng-Lai, Alice; Zeng, Biao; Stein, John F.; Zhou, Xiaolin
2011-01-01
The development of reading skills may depend to a certain extent on the development of basic visual perception. The magnocellular theory of developmental dyslexia assumes that deficits in the magnocellular pathway, indicated by less sensitivity in perceiving dynamic sensory stimuli, are responsible for a proportion of reading difficulties…
Elevated arousal levels enhance contrast perception.
Kim, Dongho; Lokey, Savannah; Ling, Sam
2017-02-01
Our state of arousal fluctuates from moment to moment, and these fluctuations can have profound impacts on behavior. Arousal has been proposed to play a powerful, widespread role in the brain, influencing processes as far-ranging as perception, memory, learning, and decision making. Although arousal clearly plays a critical role in modulating behavior, the mechanisms underlying this modulation remain poorly understood. To address this knowledge gap, we examined the modulatory role of arousal on one of the cornerstones of visual perception: contrast perception. Using a reward-driven paradigm to manipulate arousal state, we discovered that an elevated arousal state substantially enhances visual sensitivity, incurring a multiplicative modulation of the contrast response. Contrast defines vision, determining whether objects appear visible or invisible to us, and these results indicate that one consequence of a decreased arousal state is an impaired ability to visually process our environment.
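The multiplicative modulation of contrast response described above can be sketched with a standard Naka-Rushton contrast response function, where a multiplicative "response gain" term scales the whole curve. This is a hypothetical illustration of the general idea, not the authors' model, and all parameter values are invented:

```python
def contrast_response(c, r_max=1.0, c50=0.2, n=2.0, gain=1.0):
    """Naka-Rushton contrast response; `gain` scales the response
    multiplicatively, as elevated arousal is reported to do."""
    return gain * r_max * c**n / (c**n + c50**n)

low_arousal = contrast_response(0.5, gain=1.0)
high_arousal = contrast_response(0.5, gain=1.5)  # hypothetical 50% gain boost

# A response-gain change is multiplicative: the ratio between the two
# curves is constant at every contrast level.
print(high_arousal / low_arousal)  # → 1.5
```

Under this toy parameterization, the same 1.5 ratio would hold at any contrast `c`, which is what distinguishes a multiplicative (response-gain) modulation from an additive shift.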
Rashid, Mahbub; Khan, Nayma; Jones, Belinda
2016-01-01
This study compared physical and visual accessibilities and their associations with staff perception and interaction behaviors in 2 intensive care units (ICUs) with open-plan and racetrack layouts. For the study, physical and visual accessibilities were measured using the spatial analysis techniques of Space Syntax. Data on staff perception were collected from 81 clinicians using a questionnaire survey. The locations of 2233 interactions, and the location and length of another 339 interactions in these units were collected using systematic field observation techniques. According to the study, physical and visual accessibilities were different in the 2 ICUs, and clinicians' primary workspaces were physically and visually more accessible in the open-plan ICU. Physical and visual accessibilities affected how well clinicians knew their peers and where their peers were located in these units. Physical and visual accessibilities also affected clinicians' perception of interaction and communication and of teamwork and collaboration in these units. Additionally, physical and visual accessibilities showed significant positive associations with interaction behaviors in these units, with the open-plan ICU showing stronger associations. However, physical accessibilities were less important than visual accessibilities in relation to interaction behaviors in these ICUs. The implications of these findings for ICU design are discussed.
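One core Space Syntax accessibility measure mentioned above is topological depth: how many steps a space lies, on average, from every other space in the layout, with lower mean depth meaning higher accessibility. The sketch below illustrates the idea on an invented toy floor plan; it is not the study's data or software:

```python
from collections import deque

# Hypothetical adjacency graph for a toy ICU floor plan (invented for
# illustration only; not from the study).
layout = {
    "nurse_station": ["corridor"],
    "corridor": ["nurse_station", "room_1", "room_2", "supply"],
    "room_1": ["corridor"],
    "room_2": ["corridor"],
    "supply": ["corridor"],
}

def mean_depth(graph, start):
    """Average shortest-path (step) distance from `start` to all other spaces."""
    dist = {start: 0}
    queue = deque([start])
    while queue:  # breadth-first search over the adjacency graph
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    others = [d for n, d in dist.items() if n != start]
    return sum(others) / len(others)

# The corridor is the most integrated (accessible) space in this toy plan.
print(mean_depth(layout, "corridor"))  # → 1.0
print(mean_depth(layout, "room_1"))    # → 1.75
```

In Space Syntax terms, a space like the corridor with low mean depth is highly "integrated," which is the kind of property the study relates to staff interaction patterns.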
Are neural correlates of visual consciousness retinotopic?
ffytche, Dominic H; Pins, Delphine
2003-11-14
Some visual neurons code what we see, their defining characteristic being a response profile which mirrors conscious percepts rather than veridical sensory attributes. One issue yet to be resolved is whether, within a given cortical area, conscious visual perception relates to diffuse activity across the entire population of such cells or focal activity within the sub-population mapping the location of the perceived stimulus. Here we investigate the issue in the human brain with fMRI, using a threshold stimulation technique to dissociate perceptual from non-perceptual activity. Our results point to a retinotopic organisation of perceptual activity in early visual areas, with independent perceptual activations for different regions of visual space.
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
A Portable Platform for Evaluation of Visual Performance in Glaucoma Patients
Rosen, Peter N.; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; Diniz-Filho, Alberto; Marvasti, Amir H.; Medeiros, Felipe A.
2015-01-01
Purpose: To propose a new tablet-enabled test for evaluation of visual performance in glaucoma, the PERformance CEntered Portable Test (PERCEPT), and to evaluate its ability to predict history of falls and motor vehicle crashes. Design: Cross-sectional study. Methods: The study involved 71 patients with glaucomatous visual field defects on standard automated perimetry (SAP) and 59 control subjects. The PERCEPT was based on the concept of increasing visual task difficulty to improve detection of central visual field losses in glaucoma patients. Subjects had to perform a foveal 8-alternative-forced-choice orientation discrimination task, while detecting a simultaneously presented peripheral stimulus within a limited presentation time. Subjects also underwent testing with the Useful Field of View (UFOV) divided attention test. The ability to predict history of motor vehicle crashes and falls was investigated by odds ratios and incident-rate ratios, respectively. Results: When adjusted for age, only the PERCEPT processing speed parameter showed significantly larger values in glaucoma compared to controls (difference: 243 ms; P<0.001). PERCEPT results had a stronger association with history of motor vehicle crashes and falls than UFOV. Each 1 standard deviation increase in PERCEPT processing speed was associated with an odds ratio of 2.69 (P = 0.003) for predicting history of motor vehicle crashes and with an incident-rate ratio of 1.95 (P = 0.003) for predicting history of falls. Conclusion: A portable platform for testing visual function was able to detect functional deficits in glaucoma, and its results were significantly associated with history of involvement in motor vehicle crashes and history of falls. PMID:26445501
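The "odds ratio per 1 SD increase" reported above follows the standard logistic-regression interpretation: a coefficient beta fitted on a z-scored predictor gives an odds ratio of exp(beta) per standard deviation. The arithmetic sketch below back-derives the coefficient from the paper's reported OR of 2.69 purely for illustration; it is not the authors' analysis code:

```python
import math

# Coefficient on the z-scored processing-speed predictor, back-derived
# from the reported odds ratio (illustrative only).
beta_per_sd = math.log(2.69)

# Odds ratio per 1 SD increase in processing speed.
odds_ratio = math.exp(beta_per_sd)  # → 2.69

# Odds ratios compose multiplicatively: a 2 SD increase multiplies
# the odds of a crash history by OR squared.
print(odds_ratio ** 2)
```

This multiplicative composition is why a seemingly modest per-SD odds ratio can imply a large risk gradient across the range of a predictor.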
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Perception and understanding of intentions and actions: does gender matter?
Pavlova, Marina
2009-01-09
Perception of intentions and dispositions of others through body motion, body language, gestures and actions is of immense importance for a variety of daily-life situations and adaptive social behavior. This ability is of particular value because of the potential discrepancy between verbal and non-verbal communication levels. Recent data shows that some aspects of visual social perception are gender dependent. The present study asks whether and, if so, how the ability for perception and understanding of others' intentions and actions depends on perceivers' gender. With this purpose in mind, a visual event arrangement (EA) task was administered to female and male participants of two groups, adolescents aged 13-16 years and young adults. The main outcome of the study shows no difference in performance on the EA task between female and male participants in both groups. The findings are discussed in terms of gender-related differences in behavioral components and brain mechanisms engaged in visual social perception.
Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.
Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath
2016-01-01
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener. PMID:27031343