Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the most socially important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
The paradoxical moon illusions.
Gilinsky, A S
1980-02-01
An adaptation theory of visual space is developed and applied to the data of a variety of studies of visual space perception. By distinguishing between the perceived distance of an object and that of the background or sky, the theory resolves the paradox of the moon illusions and relates both perceived size and perceived distance of the moon to the absolute level of spatial adaptation. The theory assumes that visual space expands or contracts in adjustment to changes in the sensory indicators of depth and provides a measure, A, of this adaptation level. Changes in A have two effects: one on perceived size, one on perceived distance. Since A varies systematically as a function of angle of regard, availability of cues, and the total space-value, A is a measure of the moon illusions, and a practical index of individual differences among pilots and astronauts in the perception of the size and distance of objects on the ground and in the air.
Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.
Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint
2017-09-13
GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. 
However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that the GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections.
Perception of Auditory-Visual Distance Relations by 5-Month-Old Infants.
ERIC Educational Resources Information Center
Pickens, Jeffrey
1994-01-01
Sixty-four infants viewed side-by-side videotapes of toy trains (in four visual conditions) and listened to sounds at increasing or decreasing amplitude designed to match one of the videos. Results suggested that 5-month-olds were sensitive to auditory-visual distance relations and that change in size was an important visual depth cue.
Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark
2017-05-01
There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens model has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.
The effect of phasic auditory alerting on visual perception.
Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas
2017-08-01
Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information.
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2018-03-19
Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3-6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used.
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2015-01-01
Objects rarely appear in isolation in natural scenes. Although many studies have investigated how nearby objects influence perception in cluttered scenes (i.e., crowding), none has studied how nearby objects influence visually guided action. In Experiment 1, we found that participants could scale their grasp to the size of a crowded target even when they could not perceive its size, demonstrating for the first time that neurologically intact participants can use visual information that is not available to conscious report to scale their grasp to real objects in real scenes. In Experiments 2 and 3, we found that changing the eccentricity of the display and the orientation of the flankers had no effect on grasping but strongly affected perception. The differential effects of eccentricity and flanker orientation on perception and grasping show that the known differences in retinotopy between the ventral and dorsal streams are reflected in the way in which people deal with targets in cluttered scenes.
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J
2013-04-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.
Robinson, Eric; Oldham, Melissa; Cuckson, Imogen; Brunstrom, Jeffrey M; Rogers, Peter J; Hardman, Charlotte A
2016-03-01
Portion sizes of many foods have increased in recent times. In three studies we examined the effect that repeated visual exposure to larger versus smaller food portion sizes has on perceptions of what constitutes a normal-sized food portion and measures of portion size selection. In studies 1 and 2 participants were visually exposed to images of large or small portions of spaghetti bolognese, before making evaluations about an image of an intermediate sized portion of the same food. In study 3 participants were exposed to images of large or small portions of a snack food before selecting a portion size of snack food to consume. Across the three studies, visual exposure to larger as opposed to smaller portion sizes resulted in participants considering a normal portion of food to be larger than a reference intermediate sized portion. In studies 1 and 2 visual exposure to larger portion sizes also increased the size of self-reported ideal meal size. In study 3 visual exposure to larger portion sizes of a snack food did not affect how much of that food participants subsequently served themselves and ate. Visual exposure to larger portion sizes may adjust visual perceptions of what constitutes a 'normal' sized portion. However, we did not find evidence that visual exposure to larger portions altered snack food intake.
Is it just motion that silences awareness of other visual changes?
Peirce, Jonathan W
2013-06-28
When an array of visual elements is changing color, size, or shape incoherently, the changes are typically quite visible even when the overall color, size, or shape statistics of the field may not have changed. When the dots also move, however, the changes become much less apparent; awareness of them is "silenced" (Suchow & Alvarez, 2011). This finding might indicate that the perception of motion is of particular importance to the visual system, such that it is given priority in processing over other forms of visual change. Here we test whether that is the case by examining the converse: whether awareness of motion signals can be silenced by potent coherent changes in color or size. We find that they can, and with very similar effects, indicating that motion is not critical for silencing. Suchow and Alvarez's dots always moved in the same direction with the same speed, causing them to be grouped as a single entity. We also tested whether this coherence was a necessary component of the silencing effect. It is not; when the dot speeds are randomly selected, such that no coherent motion is present, the silencing effect remains. It is clear that neither motion nor grouping is directly responsible for the silencing effect. Silencing can be generated from any potent visual change.
Turbidity in oil-in-water-emulsions - Key factors and visual perception.
Linke, C; Drusch, S
2016-11-01
The aim of the present study is to systematically describe the factors affecting turbidity in beverage emulsions and to gain a better understanding of the visual perception of turbidity. The sensory evaluation of human turbidity perception showed that humans are most sensitive to turbidity differences between two samples in the range between 1000 and 1500 NTU (ratio) (nephelometric turbidity units). At very high turbidity, differences of more than 2000 NTU (ratio) were needed for two samples to be perceived as significantly different. Particle size was the most important factor affecting turbidity: a maximum turbidity occurs at a mean volume-surface diameter of 0.2 μm for the oil droplet size. Additional parameters were the refractive index, the composition of the aqueous phase and the presence of excess emulsifier. At a concentration typical for a beverage emulsion, a change in the refractive index of the oil phase may alter turbidity by up to 30%. With this knowledge of the visual perception of turbidity and its determining factors, turbidity can be tailored in product development according to customer requirements, and acceptable variations in optical appearance can be defined for quality control.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Distance and Size Perception in Astronauts during Long-Duration Spaceflight
Clément, Gilles; Skinner, Anna; Lathan, Corinna
2013-01-01
Exposure to microgravity during spaceflight is known to elicit orientation illusions, errors in sensory localization, postural imbalance, changes in vestibulo-spinal and vestibulo-ocular reflexes, and space motion sickness. The objective of this experiment was to investigate whether an alteration in cognitive visual-spatial processing, such as the perception of distance and size of objects, is also taking place during prolonged exposure to microgravity. Our results show that astronauts on board the International Space Station exhibit biases in the perception of their environment. Objects’ heights and depths were perceived as taller and shallower, respectively, and distances were generally underestimated in orbit compared to Earth. These changes may occur because the perspective cues for depth are less salient in microgravity or the eye-height scaling of size is different when an observer is not standing on the ground. This finding has operational implications for human space exploration missions. PMID:25369884
Illusions of having small or large invisible bodies influence visual perception of object size
van der Hoort, Björn; Ehrsson, H. Henrik
2016-01-01
The size of our body influences the perceived size of the world so that objects appear larger to children than to adults. The mechanisms underlying this effect remain unclear. It has been difficult to dissociate visual rescaling of the external environment based on an individual’s visible body from visual rescaling based on a central multisensory body representation. To differentiate these potential causal mechanisms, we manipulated body representation without a visible body by taking advantage of recent developments in body representation research. Participants experienced the illusion of having a small or large invisible body while object-size perception was tested. Our findings show that the perceived size of test-objects was determined by the size of the invisible body (inverse relation), and by the strength of the invisible body illusion. These findings demonstrate how central body representation directly influences visual size perception, without the need for a visible body, by rescaling the spatial representation of the environment. PMID:27708344
The visual perception of size and distance.
DOT National Transportation Integrated Search
1962-07-01
The perception of absolute distance has been assumed to be important in the perception of the size of objects and the depth between them. A different hypothesis is proposed. It is asserted that perceived relative size and distance are the primary psy...
Metabolic rate and body size are linked with perception of temporal information
Healy, Kevin; McNally, Luke; Ruxton, Graeme D.; Cooper, Natalie; Jackson, Andrew L.
2013-01-01
Body size and metabolic rate both fundamentally constrain how species interact with their environment, and hence ultimately affect their niche. While many mechanisms leading to these constraints have been explored, their effects on the resolution at which temporal information is perceived have been largely overlooked. The visual system acts as a gateway to the dynamic environment and the relative resolution at which organisms are able to acquire and process visual information is likely to restrict their ability to interact with events around them. As both smaller size and higher metabolic rates should facilitate rapid behavioural responses, we hypothesized that these traits would favour perception of temporal change over finer timescales. Using critical flicker fusion frequency, the lowest frequency of flashing at which a flickering light source is perceived as constant, as a measure of the maximum rate of temporal information processing in the visual system, we carried out a phylogenetic comparative analysis of a wide range of vertebrates that supported this hypothesis. Our results have implications for the evolution of signalling systems and predator–prey interactions, and, combined with the strong influence that both body mass and metabolism have on a species' ecological niche, suggest that time perception may constitute an important and overlooked dimension of niche differentiation. PMID:24109147
Contributions of visual and embodied expertise to body perception.
Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D
2012-01-01
Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues, particularly visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. Our aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.
The Effect of Temporal Perception on Weight Perception
Kambara, Hiroyuki; Shin, Duk; Kawase, Toshihiro; Yoshimura, Natsue; Akahane, Katsuhito; Sato, Makoto; Koike, Yasuharu
2013-01-01
A successful catch of a falling ball requires an accurate estimation of the timing for when the ball hits the hand. In a previous experiment in which participants performed a ball-catching task in a virtual reality environment, we accidentally found that the weight of a falling ball was perceived differently when the timing of ball load force to the hand was shifted from the timing expected from visual information. Although it is well known that spatial information about an object, such as size, can easily deceive our perception of its heaviness, the relationship between temporal information and perceived heaviness is still not clear. In this study, we investigated the effect of temporal factors on weight perception. We conducted ball-catching experiments in a virtual environment where the timing of load force exertion was shifted away from the visual contact timing (i.e., the time when the ball hit the hand in the display). We found that the ball was perceived heavier when force was applied earlier than visual contact and lighter when force was applied after visual contact. We also conducted additional experiments in which participants were conditioned to one of two constant time offsets prior to testing weight perception. After performing ball-catching trials with 60 ms advanced or delayed load force exertion, participants' subjective judgment on the simultaneity of visual contact and force exertion changed, reflecting a shift in perception of time offset. In addition, the timing of catching motion initiation relative to visual contact changed, reflecting a shift in estimation of force timing. We also found that participants began to perceive the ball as lighter after conditioning to the 60 ms advanced offset and heavier after the 60 ms delayed offset. These results suggest that perceived heaviness depends not on the actual time offset between force exertion and visual contact but on the subjectively perceived time offset between them and/or estimation error in force timing.
PMID:23450805
Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean
2018-05-01
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech.
Interocular induction of illusory size perception.
Song, Chen; Schwarzkopf, D Samuel; Rees, Geraint
2011-03-11
The perceived size of objects not only depends on their physical size but also on the surroundings in which they appear. For example, an object surrounded by small items looks larger than a physically identical object surrounded by big items (Ebbinghaus illusion), and a physically identical but distant object looks larger than an object that appears closer in space (Ponzo illusion). Activity in human primary visual cortex (V1) reflects the perceived rather than the physical size of objects, indicating an involvement of V1 in illusory size perception. Here we investigate the role of eye-specific signals in two common size illusions in order to provide further information about the mechanisms underlying illusory size perception. We devised stimuli so that an object and its spatial context associated with illusory size perception could be presented together to one eye or separately to two eyes. We found that the Ponzo illusion had an equivalent magnitude whether the objects and contexts were presented to the same or different eyes, indicating that it may be largely mediated by binocular neurons. In contrast, the Ebbinghaus illusion became much weaker when objects and their contexts were presented to different eyes, indicating important contributions to the illusion from monocular neurons early in the visual pathway. Our findings show that two well-known size illusions - the Ponzo illusion and the Ebbinghaus illusion - are mediated by different neuronal populations, and suggest that the underlying neural mechanisms associated with illusory size perception differ and can be dependent on monocular channels in the early visual pathway.
Goodhew, Stephanie C; Shen, Elizabeth; Edwards, Mark
2016-08-01
An important but often neglected aspect of attention is how changes in the attentional spotlight size impact perception. The zoom-lens model predicts that a small ("focal") attentional spotlight enhances all aspects of perception relative to a larger ("diffuse") spotlight. However, based on the physiological properties of the two major classes of visual cells (magnocellular and parvocellular neurons), we predicted trade-offs in spatial and temporal acuity as a function of spotlight size. Contrary to both of these accounts, however, across two experiments we found that attentional spotlight size affected spatial acuity, such that spatial acuity was enhanced for a focal relative to a diffuse spotlight, whereas the same modulations in spotlight size had no impact on temporal acuity. This likely reflects the function of attention: to induce the high spatial resolution of the fovea in the periphery, where spatial resolution is poor but temporal resolution is good. It is adaptive, therefore, for the attentional spotlight to enhance spatial acuity, whereas enhancing temporal acuity does not confer the same benefit.
Learning to Recognize Patterns: Changes in the Visual Field with Familiarity
Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo
1995-01-01
Two studies were conducted to investigate changes which take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) which were unfamiliar to two native English-speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three-component model of pattern perception. In the first stage, a representation of the stimulus must be constructed by the subject, and restriction of the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informative areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].
Top-down control of visual perception: attention in natural vision.
Rolls, Edmund T
2008-01-01
Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though they are still present. It is suggested that the reduced receptive-field size in natural scenes and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when in perceptual systems they take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.
Enhanced visual performance in obsessive compulsive personality disorder.
Ansari, Zohreh; Fadardi, Javad Salehi
2016-12-01
Vision is often considered the commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were screened with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II), among whom 18 (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification; controls were 20 persons (mean age = 27.85; SD = 5.26; 84% female) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task assessing two dimensions of visual performance (i.e., visual acuity: detecting the location of change, complexity, and size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but took longer to detect a change on pairs related to complexity and contrast. OCPD individuals thus seem to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Etchemendy, Pablo E; Spiousas, Ignacio; Vergara, Ramiro
2018-01-01
In a recently published work by our group [Scientific Reports, 7, 7189 (2017)], we performed experiments on visual distance perception in two dark rooms with extremely different reverberation times: one anechoic (T ∼ 0.12 s) and the other reverberant (T ∼ 4 s). The perceived distance of the targets was systematically greater in the reverberant room than in the anechoic chamber. Participants also provided auditorily perceived room-size ratings, which were greater for the reverberant room. Our hypothesis was that distance estimates are affected by room size, resulting in farther responses for the room perceived as larger. The task depended strongly on the subjects' ability to infer room size from reverberation. In this article, we report a post hoc analysis showing that participants with musical expertise were better able to extract and translate reverberation cues into room-size information than nonmusicians. However, the degree to which musical expertise affects visual distance estimates remains unclear.
Perception of ensemble statistics requires attention.
Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A
2017-02-01
To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics. Copyright © 2016 Elsevier Inc. All rights reserved.
Perceptual Averaging in Individuals with Autism Spectrum Disorder.
Corbett, Jennifer E; Venuti, Paola; Melcher, David
2016-01-01
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above-chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.
Saturation in Phosphene Size with Increasing Current Levels Delivered to Human Visual Cortex.
Bosking, William H; Sun, Ping; Ozker, Muge; Pei, Xiaomei; Foster, Brett L; Beauchamp, Michael S; Yoshor, Daniel
2017-07-26
Electrically stimulating early visual cortex results in a visual percept known as a phosphene. Although phosphenes can be evoked by a wide range of electrode sizes and current amplitudes, they are invariably described as small. To better understand this observation, we electrically stimulated 93 electrodes implanted in the visual cortex of 13 human subjects who reported phosphene size while stimulation current was varied. Phosphene size increased as the stimulation current was initially raised above threshold, but then rapidly reached saturation. Phosphene size also depended on the location of the stimulated site, with size increasing with distance from the foveal representation. We developed a model relating phosphene size to the amount of activated cortex and its location within the retinotopic map. First, a sigmoidal curve was used to predict the amount of activated cortex at a given current. Second, the amount of active cortex was converted to degrees of visual angle by multiplying by the inverse cortical magnification factor for that retinotopic location. This simple model accurately predicted phosphene size for a broad range of stimulation currents and cortical locations. The unexpected saturation in phosphene sizes suggests that the functional architecture of cerebral cortex may impose fundamental restrictions on the spread of artificially evoked activity and this may be an important consideration in the design of cortical prosthetic devices. SIGNIFICANCE STATEMENT Understanding the neural basis for phosphenes, the visual percepts created by electrical stimulation of visual cortex, is fundamental to the development of a visual cortical prosthetic. Our experiments in human subjects implanted with electrodes over visual cortex show that it is the activity of a large population of cells spread out across several millimeters of tissue that supports the perception of a phosphene. 
In addition, we describe an important feature of the production of phosphenes by electrical stimulation: phosphene size saturates at a relatively low current level. This finding implies that, with current methods, visual prosthetics will have a limited dynamic range available to control the production of spatial forms and that more advanced stimulation methods may be required. Copyright © 2017 the authors 0270-6474/17/377188-10$15.00/0.
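The two-stage phosphene model described above (a sigmoidal mapping from current to activated cortical extent, followed by conversion to visual angle via the inverse cortical magnification factor) can be sketched in a few lines. All parameter values below are illustrative placeholders, not the values fitted in the study; the magnification formula uses the common Horton-and-Hoyt-style form M = a / (E + e2).

```python
import math

def activated_extent_mm(current_ma, d_max_mm=3.0, k=2.0, i_half=1.0):
    """Sigmoidal spread of cortical activation with stimulation current.

    d_max_mm, k, and i_half are illustrative parameters, not fitted values.
    """
    return d_max_mm / (1.0 + math.exp(-k * (current_ma - i_half)))

def magnification_mm_per_deg(eccentricity_deg, a=17.3, e2=0.75):
    """Cortical magnification factor in the form M = a / (E + e2)."""
    return a / (eccentricity_deg + e2)

def phosphene_size_deg(current_ma, eccentricity_deg):
    """Predicted phosphene size: activated cortex divided by magnification."""
    return activated_extent_mm(current_ma) / magnification_mm_per_deg(eccentricity_deg)
```

With these placeholder parameters, predicted size rises quickly just above threshold, saturates at higher currents, and grows with eccentricity, mirroring the qualitative findings reported above.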
Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei
2011-01-01
Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…
Endogenous modulation of human visual cortex activity improves perception at twilight.
Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A
2018-04-10
Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.
Emotion and Perception: The Role of Affective Information
Zadra, Jonathan R.; Clore, Gerald L.
2011-01-01
Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, the layout of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can both be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565
Developmental study of visual perception of handwriting movement: influence of motor competencies?
Bidet-Ildei, Christel; Orliaguet, Jean-Pierre
2008-07-25
This paper investigates the influence of motor competencies on the visual perception of human movements in 6- to 10-year-old children. To this end, we compared the kinematics of actually performed and perceptually preferred handwriting movements. The two children's tasks were (1) to write the letter e on a digitizer (handwriting task) and (2) to adjust the velocity of an e displayed on a screen so that it would correspond to "their preferred velocity" (perceptive task). In both tasks, the size of the letter (from 3.4 to 54.02 cm) differed on each trial. Results showed that irrespective of age and task, total movement time conformed to the isochrony principle, i.e., the tendency to maintain a constant movement duration across changes of amplitude. However, concerning movement speed, there was no developmental correspondence between results obtained in the motor and the perceptive tasks. In the handwriting task, movement time decreased with age, but no effect of age was observed in the perceptive task. Therefore, perceptual preference for handwriting movements in children could not be strictly interpreted in terms of motor-perceptual coupling.
Costa, Thiago L; Costa, Marcelo F; Magalhães, Adsson; Rêgo, Gabriel G; Nagy, Balázs V; Boggio, Paulo S; Ventura, Dora F
2015-02-19
Recent research suggests that V1 plays an active role in the judgment of size and distance. Nevertheless, no research has been performed using direct brain stimulation to address this issue. We used transcranial direct-current stimulation (tDCS) to directly modulate the early stages of cortical visual processing while measuring size and distance perception with a psychophysical scaling method of magnitude estimation in a repeated-measures design. The subjects randomly received anodal, cathodal, and sham tDCS in separate sessions starting with size or distance judgment tasks. Power functions were fit to the size judgment data, whereas logarithmic functions were fit to the distance judgment data. Slopes and R² values were compared with separate repeated-measures analyses of variance with two factors: task (size vs. distance) and tDCS (anodal vs. cathodal vs. sham). Anodal tDCS significantly decreased slopes, apparently interfering with size perception. No effects were found for distance perception. Consistent with previous studies, the results of the size task appeared to reflect a prothetic continuum, whereas the results of the distance task seemed to reflect a metathetic continuum. The differential effects of tDCS on these tasks may support the hypothesis that different physiological mechanisms underlie judgments on these two continua. The results further suggest the complex involvement of the early visual cortex in size judgment tasks that go beyond the simple representation of low-level stimulus properties. This supports predictive coding models and experimental findings that suggest that higher-order visual areas may inhibit incoming information from the early visual cortex through feedback connections when complex tasks are performed. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
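The curve-fitting step described above (power functions for size judgments, logarithmic functions for distance judgments) amounts to ordinary least-squares regression after a log transform. A minimal sketch, using synthetic data rather than the study's measurements:

```python
import numpy as np

def fit_power(stimulus, judgment):
    """Fit judgment = a * stimulus**b via linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(stimulus), np.log(judgment), 1)
    return np.exp(log_a), b

def fit_log(stimulus, judgment):
    """Fit judgment = a + b * ln(stimulus) via linear regression."""
    b, a = np.polyfit(np.log(stimulus), judgment, 1)
    return a, b

# Synthetic magnitude estimates, purely for illustration:
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
a_pow, b_pow = fit_power(x, 2.0 * x ** 0.8)   # recovers a ≈ 2.0, b ≈ 0.8
a_log, b_log = fit_log(x, 1.0 + 3.0 * np.log(x))  # recovers a ≈ 1.0, b ≈ 3.0
```

The fitted exponent (for size) and slope (for distance) are the quantities that analyses of variance such as the one above would compare across stimulation conditions.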
Pooresmaeili, Arezoo; Arrighi, Roberto; Biagi, Laura; Morrone, Maria Concetta
2016-01-01
In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene. PMID:24089504
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
Modeling visual clutter perception using proto-object segmentation
Yu, Chen-Ping; Samaras, Dimitris; Zelinsky, Gregory J.
2014-01-01
We introduce the proto-object model of visual clutter perception. This unsupervised model segments an image into superpixels, then merges neighboring superpixels that share a common color cluster to obtain proto-objects—defined here as spatially extended regions of coherent features. Clutter is estimated by simply counting the number of proto-objects. We tested this model using 90 images of realistic scenes that were ranked by observers from least to most cluttered. Comparing this behaviorally obtained ranking to a ranking based on the model clutter estimates, we found a significant correlation between the two (Spearman's ρ = 0.814, p < 0.001). We also found that the proto-object model was highly robust to changes in its parameters and was generalizable to unseen images. We compared the proto-object model to six other models of clutter perception and demonstrated that it outperformed each, in some cases dramatically. Importantly, we also showed that the proto-object model was a better predictor of clutter perception than an actual count of the number of objects in the scenes, suggesting that the set size of a scene may be better described by proto-objects than objects. We conclude that the success of the proto-object model is due in part to its use of an intermediate level of visual representation—one between features and objects—and that this is evidence for the potential importance of a proto-object representation in many common visual percepts and tasks. PMID:24904121
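As a rough illustration of the merge-and-count step of the proto-object model: assuming the superpixel segmentation and color clustering have already been performed, so that each grid cell holds a superpixel's color-cluster label, the clutter estimate reduces to counting connected regions of same-labeled neighbors. (The published model operates on irregular superpixels from real images, not a regular grid; this is a simplified sketch.)

```python
def count_proto_objects(labels):
    """Count proto-objects in a grid of superpixel color-cluster labels.

    Neighboring superpixels with the same color cluster merge into one
    proto-object; clutter is estimated as the number of merged regions.
    """
    rows, cols = len(labels), len(labels[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            count += 1                 # new proto-object found
            stack = [(r, c)]           # flood-fill its full extent
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and labels[ny][nx] == labels[y][x]):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
    return count
```

For example, `count_proto_objects([[0, 0, 1], [0, 2, 1], [2, 2, 1]])` yields 3: one merged region per color cluster, regardless of how many cells each occupies.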
Perspective Space as a Model for Distance and Size Perception.
Erkelens, Casper J
2017-01-01
In the literature, perspective space has been introduced as a model of visual space. Perspective space is grounded on the perspective nature of visual space during both binocular and monocular vision. A single parameter, that is, the distance of the vanishing point, transforms the geometry of physical space into that of perspective space. The perspective-space model predicts perceived angles, distances, and sizes. The model is compared with other models for distance and size perception. Perspective space predicts that perceived distance and size as a function of physical distance are described by hyperbolic functions. Alternatively, power functions have been widely used to describe perceived distance and size. Comparison of power and hyperbolic functions shows that both functions are equivalent within the range of distances that have been judged in experiments. Two models describing perceived distance on the ground plane appear to be equivalent with the perspective-space model too. The conclusion is that perspective space unifies a number of models of distance and size perception.
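The hyperbolic compression that perspective space predicts for perceived distance can be illustrated numerically. The functional form and the vanishing-point distance below are a plausible sketch of the model's single-parameter idea, not the paper's fitted equation; the power law is the alternative mentioned in the abstract.

```python
def perceived_distance(d, vanishing_d=100.0):
    """Hyperbolic sketch of the perspective-space prediction: perceived
    distance is compressed and approaches the vanishing-point distance
    asymptotically. vanishing_d is an illustrative parameter."""
    return vanishing_d * d / (vanishing_d + d)

def power_law(d, a=1.9, b=0.8):
    """Stevens-style power function often fit to the same judgments;
    a and b are illustrative coefficients."""
    return a * d ** b
```

Over the moderate range of distances typically judged in experiments, both functions produce similarly compressive curves, consistent with the equivalence argued above; they diverge only at large distances, where the hyperbolic form saturates while the power law keeps growing.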
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The perceptual homunculus: the perception of the relative proportions of the human body.
Linkenauger, Sally A; Wong, Hong Yu; Geuss, Michael; Stefanucci, Jeanine K; McCulloch, Kathleen C; Bülthoff, Heinrich H; Mohler, Betty J; Proffitt, Dennis R
2015-02-01
Given that observing one's body is ubiquitous in experience, it is natural to assume that people accurately perceive the relative sizes of their body parts. This assumption is mistaken. In a series of studies, we show that there are dramatic systematic distortions in the perception of bodily proportions, as assessed by visual estimation tasks in which participants were asked to compare the lengths of two body parts. These distortions are not evident when participants estimate the extent of a body part relative to a noncorporeal object, or when asked to estimate noncorporeal objects that are the same length as their body parts. Our results reveal a radical asymmetry in the perception of corporeal and noncorporeal relative size estimates. Our findings also suggest that people visually perceive the relative size of their body parts as a function of each part's relative tactile sensitivity and physical size.
Does my step look big in this? A visual illusion leads to safer stepping behaviour.
Elliott, David B; Vale, Anna; Whitaker, David; Buckley, John G
2009-01-01
Tripping is a common factor in falls and a typical safety strategy to avoid tripping on steps or stairs is to increase foot clearance over the step edge. In the present study we asked whether the perceived height of a step could be increased using a visual illusion and whether this would lead to the adoption of a safer stepping strategy, in terms of greater foot clearance over the step edge. The study also addressed the controversial question of whether motor actions are dissociated from visual perception. Twenty-one young, healthy subjects perceived the step to be higher in a configuration of the horizontal-vertical illusion compared to a reverse configuration (p = 0.01). During a simple stepping task, maximum toe elevation changed by an amount corresponding to the size of the visual illusion (p < 0.001). Linear regression analyses showed highly significant associations between perceived step height and maximum toe elevation for all conditions. The perceived height of a step can be manipulated using a simple visual illusion, leading to the adoption of a safer stepping strategy in terms of greater foot clearance over a step edge. In addition, the strong link found between perception of a visual illusion and visuomotor action provides additional support to the view that the original, controversial proposal by Goodale and Milner (1992) of two separate and distinct visual streams for perception and visuomotor action should be re-evaluated.
Visual perception of writing and pointing movements.
Méary, David; Chary, Catherine; Palluel-Germain, Richard; Orliaguet, Jean-Pierre
2005-01-01
Studies of movement production have shown that the relationship between the amplitude of a movement and its duration varies according to the type of gesture. In the case of pointing movements the duration increases as a function of distance and width of the target (Fitts' law), whereas for writing movements the duration tends to remain constant across changes in trajectory length (isochrony principle). We compared the visual perception of these two categories of movement. The participants judged the speed of a light spot that portrayed the motion of the end-point of a hand-held pen (pointing or writing). For the two types of gesture we used 8 stimulus sizes (from 2.5 cm to 20 cm) and 32 durations (from 0.2 s to 1.75 s). Viewing each combination of size and duration, participants had to indicate whether the movement speed seemed "fast", "slow", or "correct". Results showed that the participants' perceptual preferences were in agreement with the rules of movement production. The stimulus size was more influential in the pointing condition than in the writing condition. We consider that this finding reflects the influence of common representational resources for perceptual judgment and movement production.
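The two production rules contrasted in this abstract can be sketched in code. This is an illustrative sketch only, not the authors' model: Fitts' law predicts that pointing duration grows with target distance D and shrinks with target width W, while the isochrony principle predicts that writing duration stays roughly constant across trajectory lengths. The coefficients a and b below are hypothetical placeholders.

```python
import math

# Fitts' law: movement time grows logarithmically with the index of
# difficulty, log2(2D / W). Coefficients a and b are illustrative.
def fitts_duration(D, W, a=0.1, b=0.15):
    """Pointing movement time: MT = a + b * log2(2D / W)."""
    return a + b * math.log2(2 * D / W)

# Isochrony principle: writing duration is approximately independent of
# trajectory length (the base duration here is an arbitrary assumption).
def isochrony_duration(trajectory_length, base=0.6):
    return base

# Durations across the range of stimulus sizes used in the study (2.5-20 cm):
pointing = [fitts_duration(D, W=1.0) for D in (2.5, 10.0, 20.0)]
writing = [isochrony_duration(D) for D in (2.5, 10.0, 20.0)]
```

Under these rules, stimulus size changes the predicted duration only for pointing, which parallels the finding that size was more influential in the pointing condition than in the writing condition.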
Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object.
Yamasaki, Daiki; Miyoshi, Kiyofumi; Altmann, Christian F; Ashida, Hiroshi
2018-07-01
In spite of accumulating evidence for the spatial rule governing cross-modal interaction according to the spatial consistency of stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sound) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) of visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the spatially consistent looming visual stimulus in size, but not of the spatially inconsistent and the receding visual stimulus. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front-rear spatial location of audiovisual stimuli, suggesting that the human brain differently processes audiovisual inputs based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
[Peculiarities of visual perception of dentition and smile aesthetic parameters].
Riakhovskiĭ, A N; Usanova, E V
2007-01-01
The studies determined the limits within which a displacement of the dentition central line from the facial midline, and a change in the tilt angle of the smile line, become noticeable to visual perception. They also assessed how much the visual perception of these dentition aesthetic parameters differed among doctors with different levels of experience, dental technicians, and patients.
The Role of Prediction In Perception: Evidence From Interrupted Visual Search
Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro
2014-01-01
Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate them, but also are aware that such changes might occur. PMID:24820440
Testing effects in visual short-term memory: The case of an object's size.
Makovski, Tal
2018-05-29
In many daily activities, we need to form and retain temporary representations of an object's size. Typically, such visual short-term memory (VSTM) representations follow perception and are considered reliable. Here, participants were asked to hold in mind a single simple object for a short duration and to reproduce its size by adjusting the length and width of a test probe. Experiment 1 revealed two powerful findings: First, similar to a recently reported perceptual illusion, participants greatly overestimated the size of open objects - ones with missing boundaries - relative to the same-size fully closed objects. This finding confirms that object boundaries are critical for size perception and memory. Second, and in contrast to perception, even the size of the closed objects was largely overestimated. Both inflation effects were substantial and were replicated and extended in Experiments 2-5. Experiments 6-8 used a different testing procedure to examine whether the overestimation effects are due to inflation of size in VSTM representations or to biases introduced during the reproduction phase. These data showed that while the overestimation of the open objects was repeated, the overestimation of the closed objects was not. Taken together, these findings suggest that similar to perception, only the size representation of open objects is inflated in VSTM. Importantly, they demonstrate the considerable impact of the testing procedure on VSTM tasks and further question the use of reproduction procedures for measuring VSTM.
Bruno, Nicola; Uccelli, Stefano; Viviani, Eva; de'Sperati, Claudio
2016-10-01
According to a previous report, the visual coding of size does not obey Weber's law when aimed at guiding a grasp (Ganel et al., 2008a). This result has been interpreted as evidence for a fundamental difference between sensory processing in vision-for-perception, which needs to compress a wide range of physical objects to a restricted range of percepts, and vision-for-action when applied to the much narrower range of graspable and reachable objects. We compared finger aperture in a motor task (precision grip) and perceptual task (cross modal matching or "manual estimation" of the object's size). Crucially, we tested the whole range of graspable objects. We report that both grips and estimations clearly violate Weber's law with medium-to-large objects, but are essentially consistent with Weber's law with smaller objects. These results differ from previous characterizations of perception-action dissociations in the precision of representations of object size. Implications for current functional interpretations of the dorsal and ventral processing streams in the human visual system are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
A rodent model for the study of invariant visual object recognition
Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.
2009-01-01
The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704
How we perceive our own retina.
Kirschfeld, Kuno
2017-10-25
Ever since the days of René Descartes, in the seventeenth century, the search for the relationship between subjective perception and neural activity has been an ongoing challenge. In neuroscience, an approach to the problem via the visual system has produced a paradigm using perceptual suppression, in which the percept changes over time. Cortical areas in which the neural activity was modulated in temporal correlation with this percept could be traced. Although these areas may lead directly to perception, such temporal correlation of neural activity does not suffice as ultimate proof that they actually do so. In this article, I will use a different method to show that, for the perception of our own retina, any brain area leading directly to this perception also needs to represent the retina without distortion. Furthermore, I will demonstrate that the phenomenon of size constancy must be realized in this area. © 2017 The Authors.
Multimodal emotion perception after anterior temporal lobectomy (ATL)
Milesi, Valérie; Cekic, Sezen; Péron, Julie; Frühholz, Sascha; Cristinzio, Chiara; Seeck, Margitta; Grandjean, Didier
2014-01-01
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion. PMID:24839437
The perception of depth from binocular disparity.
DOT National Transportation Integrated Search
1963-05-01
This study was concerned with the factors involved in the perception of depth from a binocular disparity. A binocularly observed configuration of constant convergences, constant visual size, and having constant binocular disparities was made to appea...
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
The moon illusion and size-distance scaling--evidence for shared neural patterns.
Weidner, Ralph; Plewan, Thorsten; Chen, Qi; Buchner, Axel; Weiss, Peter H; Fink, Gereon R
2014-08-01
A moon near the horizon is perceived as larger than a moon at the zenith, although the moon obviously does not change its size. In this study, the neural mechanisms underlying the "moon illusion" were investigated using a virtual 3-D environment and fMRI. Illusory perception of an increased moon size was associated with increased neural activity in ventral visual pathway areas including the lingual and fusiform gyri. The functional role of these areas was further explored in a second experiment. Left V3v was found to be involved in integrating retinal size and distance information, thus indicating that the brain regions that dynamically integrate retinal size and distance play a key role in generating the moon illusion.
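The size-distance integration that this abstract attributes to area V3v follows the classical size-distance scaling relation. A minimal sketch of that textbook relation (not the paper's fMRI analysis) shows why assigning the horizon moon a greater perceived distance makes it look larger despite an identical retinal image; the distance values below are arbitrary illustrative assumptions.

```python
import math

# Size-distance scaling: for a fixed retinal angle theta, the linear size
# consistent with that angle grows with perceived distance:
#   S = 2 * D * tan(theta / 2)
def perceived_size(theta_deg, perceived_distance):
    return 2 * perceived_distance * math.tan(math.radians(theta_deg) / 2)

THETA = 0.52  # the moon subtends roughly half a degree of visual angle

# Hypothetical perceived distances (arbitrary units): if the horizon moon
# is assigned twice the perceived distance of the zenith moon, the same
# retinal angle is scaled to twice the perceived linear size.
horizon_size = perceived_size(THETA, perceived_distance=10.0)
zenith_size = perceived_size(THETA, perceived_distance=5.0)
```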
The use of visual cues in gravity judgements on parabolic motion.
Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan
2018-06-21
Evidence suggests that humans rely on an earth-gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s^2. This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities, and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capabilities of adapting to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.
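What a Weber fraction of 13% means for a two-interval forced-choice task can be illustrated with a simple simulation. This is a hypothetical ideal-observer sketch, not the authors' analysis: internal noise is assumed Gaussian and scaled to the standard gravity (Weber's law), and all parameter values are illustrative assumptions.

```python
import random

def simulate_2ifc(g0, g1, weber_fraction, n_trials=2000, seed=0):
    """Proportion of trials on which g1 is judged larger than g0."""
    rng = random.Random(seed)
    sigma = weber_fraction * g0  # noise scales with the standard (Weber's law)
    hits = sum(
        rng.gauss(g1, sigma) > rng.gauss(g0, sigma) for _ in range(n_trials)
    )
    return hits / n_trials

G_EARTH = 9.81  # m/s^2

# With a 13% Weber fraction (the paper's lower bound), a 13% increase in g
# is discriminated above chance but far from perfectly, while identical
# gravities yield chance performance.
p_above_threshold = simulate_2ifc(G_EARTH, G_EARTH * 1.13, weber_fraction=0.13)
p_identical = simulate_2ifc(G_EARTH, G_EARTH, weber_fraction=0.13)
```

For this observer the expected proportion correct at the 13% increment is about Φ(1/√2) ≈ 0.76, which is one conventional way of reading "a 13% Weber fraction": changes much smaller than that are not reliably detected.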
Odors Bias Time Perception in Visual and Auditory Modalities
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
2016-01-01
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations.
Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143
Body ownership promotes visual awareness.
van der Hoort, Björn; Reingardt, Maria; Ehrsson, H Henrik
2017-08-17
The sense of ownership of one's body is important for survival, e.g., in defending the body against a threat. However, in addition to affecting behavior, it also affects perception of the world. In the case of visuospatial perception, it has been shown that the sense of ownership causes external space to be perceptually scaled according to the size of the body. Here, we investigated the effect of ownership on another fundamental aspect of visual perception: visual awareness. In two binocular rivalry experiments, we manipulated the sense of ownership of a stranger's hand through visuotactile stimulation while that hand was one of the rival stimuli. The results show that ownership, but not mere visuotactile stimulation, increases the dominance of the hand percept. This effect is due to a combination of longer perceptual dominance durations and shorter suppression durations. Together, these results suggest that the sense of body ownership promotes visual awareness.
Auditory environmental context affects visual distance perception.
Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O
2017-08-03
In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets farther than subjects assigned to the anechoic chamber. Also, we found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We then performed a second experiment in which the same subjects as in Experiment 1 were interchanged between rooms. We found that subjects preserved the responses from the previous experiment provided they were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.
Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location
Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene
2017-01-01
Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005
Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias
2016-02-01
Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds on their ability to detect mismatches between concurrently presented auditory and visual vowels and related their performance to their productive abilities and later vocabulary size. Results show that infants' ability to detect mismatches between auditory and visually presented vowels differs depending on the vowels involved. Furthermore, infants' sensitivity to mismatches is modulated by their current articulatory knowledge and correlates with their vocabulary size at 12 months of age. This suggests that, aside from infants' ability to match nonnative audiovisual cues (Pons et al., 2009), their ability to match native auditory and visual cues continues to develop during the first year of life. Our findings point to a potential role of salient vowel cues and productive abilities in the development of audiovisual speech perception, and further indicate a relation between infants' early sensitivity to audiovisual speech cues and their later language development. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Perception of Visual Speed While Moving
ERIC Educational Resources Information Center
Durgin, Frank H.; Gigone, Krista; Scott, Rebecca
2005-01-01
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
Perception of the average size of multiple objects in chimpanzees (Pan troglodytes).
Imura, Tomoko; Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki
2017-08-30
Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutional origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger, when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than they were in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. © 2017 The Authors.
Dissatisfaction with own body makes patients with eating disorders more sensitive to pain
Yamamotova, Anna; Bulant, Josef; Bocek, Vaclav; Papezova, Hana
2017-01-01
Body image represents a multidimensional concept including body image evaluation and perception of body appearance. Disturbances of body image perception are considered one of the central aspects of anorexia nervosa and bulimia nervosa. There is growing evidence that body image distortion can be associated with changes in pain perception. The aim of our study was to examine the associations between body image perception, body dissatisfaction, and nociception in women with eating disorders and in age-matched healthy control women. We measured body dissatisfaction and pain sensitivity in 61 patients with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition diagnoses of eating disorders (31 anorexia nervosa and 30 bulimia nervosa) and in 30 healthy women. Thermal pain threshold latencies were evaluated using an analgesia meter, and body image perception and body dissatisfaction were assessed using Anamorphic Micro software (digital pictures of their own body distorted into larger-body and thinner-body images). Patients with eating disorders overestimated their body size in comparison with healthy controls, but the two groups did not differ in body dissatisfaction. In the anorexia and bulimia patient groups, body dissatisfaction (calculated in pixels as desired size/true image size) correlated with pain threshold latencies (r = 0.55, p = 0.001), whereas no correlation was found between body image perception (determined as estimated size/true image size) and pain threshold. Thus, we demonstrated that in patients with eating disorders, pain perception is significantly associated with emotional rather than sensory (visual) processing of one's own body image. The more the patients desired to be thin, the more pain-sensitive they were. Our findings, based on shared mechanisms of body dissatisfaction and pain perception, support the significance of negative emotions specific to eating disorders and contribute to a better understanding of the psychosomatic characteristics of this spectrum of illnesses. PMID:28761371
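The two ratio measures and the reported correlation can be made concrete with a small sketch. The data below are invented for illustration; only the formulas (desired/true pixel ratio and Pearson r) follow the abstract:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical patients: body dissatisfaction = desired size / true image
# size (computed in pixels), paired with thermal pain threshold latencies (s)
dissatisfaction = [0.78, 0.84, 0.90, 0.95, 1.01, 0.88]
latencies = [3.0, 3.6, 3.3, 4.4, 4.1, 3.8]
print(round(pearson_r(dissatisfaction, latencies), 2))
```

A positive r here would mirror the study's pattern: the closer the desired size is to the true size (less dissatisfaction), the longer the pain threshold latency.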
Measuring the effect of attention on simple visual search.
Palmer, J; Ames, C T; Lindsey, D T
1993-02-01
Set-size effects in visual search may be due to one or more of three factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation on perception.
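A decision-only account of set-size effects can be illustrated with a max-rule signal detection simulation: every item is encoded equally well, yet accuracy still falls with set size because more noise samples feed into the decision. This is a generic sketch, not the authors' exact model; the d' value and trial count are arbitrary choices:

```python
import random

def max_rule_accuracy(set_size, d_prime, trials=20000, seed=1):
    """Two-interval accuracy of an unlimited-capacity max-rule observer.
    One display holds a target (mean d') among distractors (mean 0); the
    other holds only distractors. The observer picks the display whose
    largest sample is bigger; no item is encoded worse as set size grows."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        target_display = [rng.gauss(0.0, 1.0) for _ in range(set_size - 1)]
        target_display.append(rng.gauss(d_prime, 1.0))
        noise_display = [rng.gauss(0.0, 1.0) for _ in range(set_size)]
        correct += max(target_display) > max(noise_display)
    return correct / trials

# Accuracy drops with display size from decision noise alone
for n in (1, 4, 12):
    print(n, round(max_rule_accuracy(n, d_prime=2.0), 3))
```

The declining accuracies show how a pure decision effect can mimic an attentional limit on perception.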
The visual perception of spatial extent.
DOT National Transportation Integrated Search
1963-09-01
This study was concerned with the manner in which perceived depth and perceived frontoparallel size varied with physical distance and hence with each other. An equation expressing the relation between perceived frontoparallel size and physical depth ...
Poor shape perception is the reason reaches-to-grasp are visually guided online.
Lee, Young-Lim; Crabtree, Charles E; Norman, J Farley; Bingham, Geoffrey P
2008-08-01
Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape is inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception. Both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The result was that the former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.
Cho, Hwi-Young; Kim, Kitae; Lee, Byounghee; Jung, Jinhwa
2015-03-01
[Purpose] This study investigated brain wave and visual perception changes in stroke subjects using neurofeedback (NFB) training. [Subjects] Twenty-seven stroke subjects were randomly allocated to the NFB group (n = 13) and the control (CON) group (n = 14). [Methods] Two expert therapists provided both groups with traditional rehabilitation therapy in 30 thirty-minute sessions over the course of 6 weeks. NFB training was provided only to the NFB group; the CON group received traditional rehabilitation therapy only. Before and after the 6-week intervention, a brain wave test and the motor-free visual perception test (MVPT) were performed. [Results] Both groups showed significant differences in their relative beta wave values and attention concentration quotients. Moreover, the NFB group showed a significant difference in MVPT visual discrimination, form constancy, visual memory, visual closure, spatial relations, raw score, and processing time. [Conclusion] This study demonstrated that NFB training is more effective than traditional rehabilitation for increasing concentration and changing visual perception. In further studies, detailed and diverse investigations should be performed considering the number and characteristics of subjects and the NFB training period.
ERIC Educational Resources Information Center
Habraken, Clarisse L.
1996-01-01
Highlights the need to reinvigorate chemistry education by means of the visual-spatial approach, an approach wholly in conformance with the way modern chemistry is thought about and practiced. Discusses the changing world, multiple intelligences, imagery, chemistry's pictorial language, and perceptions in chemistry. Presents suggestions on how to…
Size Constancy in Bat Biosonar? Perceptual Interaction of Object Aperture and Distance
Heinrich, Melina; Wiegrebe, Lutz
2013-01-01
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats. 
PMID:23630598
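The geometric relation behind sonar (or visual) size constancy, recovering absolute width by combining aperture with distance, is simple to state: w = 2d·tan(α/2). A sketch with hypothetical numbers (the 0.35 m object width is an arbitrary example, not a stimulus from the study):

```python
import math

def width_from_aperture(aperture_deg, distance_m):
    """Absolute width implied by an aperture at a distance: w = 2 d tan(a/2)."""
    return 2.0 * distance_m * math.tan(math.radians(aperture_deg) / 2.0)

def aperture_from_width(width_m, distance_m):
    """Inverse mapping: the aperture an object of width w subtends at distance d."""
    return math.degrees(2.0 * math.atan(width_m / (2.0 * distance_m)))

# A hypothetical 0.35 m object: its aperture shrinks as distance grows...
for d in (1.0, 2.0):
    print(d, round(aperture_from_width(0.35, d), 1), "deg")
# ...but a size-constant observer combining aperture with distance
# recovers the same absolute width either way
print(round(width_from_aperture(aperture_from_width(0.35, 2.0), 2.0), 2))  # prints 0.35
```

The bats in the study resolved aperture and distance separately but did not perform this combination spontaneously.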
Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.
2016-01-01
During infancy, smart perceptual mechanisms develop that allow infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital for preparedness for upcoming events and for navigating a changing environment. Little is known about the brain changes that support the development of prospective control, or about processes, such as preterm birth, that may compromise it. Focusing on the perception of visual motion, this paper describes behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908
Manipulating stereopsis and vergence in an outdoor setting: moon, sky and horizon.
Enright, J T
1989-01-01
A simple stimulus generator has been constructed that permits a small illuminated target to be seen with variable inter-ocular disparity, when superimposed upon the binocular view of an outdoor landscape. This device was applied to several questions involving perception of size, distance and orientation, with the following results: (1) when the apparent distance to an "artificial moon", as perceived through stereopsis, is decreased by about 50-fold (from near horizon to about 60 m), its apparent size is reduced by only a minuscule amount (8% on average); hence, the moon illusion is probably not due to compensation--conscious or subconscious--for its apparent distance; (2) those changes in apparent size known as convergence micropsia vary as a function of the visual surround; for a vergence change of 1 deg, greater perceived change in size of a small target arises when a landscape is seen nearby than with empty sky as surround; (3) when a target is shown somewhat above the horizon against an empty sky, it must be viewed with divergence of the visual axes (image positions for "hyper-infinite" distance), in order to be perceived as vertically above objects on the skyline; this effect implies a strong backward tilt to the apparent vertical and probably reflects an attempt to "null out" the perceptual consequences of the convergence that typically occurs during downward saccades.
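The stereoscopic side of this manipulation rests on the small-angle relation between binocular disparity and distance, δ ≈ IOD/D. A sketch, assuming a 6.5 cm interocular distance (a typical value, not one from the paper):

```python
import math

IOD = 0.065  # assumed interocular distance in metres

def disparity_arcmin(distance_m, iod=IOD):
    """Approximate binocular disparity, relative to optical infinity, of a
    point at the given distance (small-angle approximation: delta = IOD / D)."""
    return math.degrees(iod / distance_m) * 60.0

# Pulling an "artificial moon" from optical infinity in to ~60 m takes
# only a few minutes of arc of crossed disparity
print(round(disparity_arcmin(60.0), 2))
```

This shows why a device adding small variable disparities to a distant target can move its apparent distance over a roughly 50-fold range.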
Attention changes perceived size of moving visual patterns.
Anton-Erxleben, Katharina; Henrich, Christian; Treue, Stefan
2007-08-23
Spatial attention shifts receptive fields in monkey extrastriate visual cortex toward the focus of attention (S. Ben Hamed, J. R. Duhamel, F. Bremmer, & W. Graf, 2002; C. E. Connor, J. L. Gallant, D. C. Preddie, & D. C. Van Essen, 1996; C. E. Connor, D. C. Preddie, J. L. Gallant, & D. C. Van Essen, 1997; T. Womelsdorf, K. Anton-Erxleben, F. Pieper, & S. Treue, 2006). This distortion in the retinotopic distribution of receptive fields might cause distortions in spatial perception such as an increase of the perceived size of attended stimuli. Here we test for such an effect in human subjects by measuring the point of subjective equality (PSE) for the perceived size of a neutral and an attended stimulus when drawing automatic attention to one of two spatial locations. We found a significant increase in perceived size of attended stimuli. Depending on the absolute stimulus size, this effect ranged from 4% to 12% and was more pronounced for smaller than for larger stimuli. In our experimental design, an attentional effect on task difficulty or a cue bias might influence the PSE measure. We performed control experiments and indeed found such effects, but they could only account for part of the observed results. Our findings demonstrate that the allocation of transient spatial attention onto a visual stimulus increases its perceived size and additionally biases subjects to select this stimulus for a perceptual judgment.
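The point of subjective equality in such experiments is the comparison size at which responses cross 50%. A minimal sketch using linear interpolation on invented data (the sizes and response proportions below are hypothetical, not the study's):

```python
def pse_by_interpolation(sizes, p_larger):
    """Return the stimulus size at which the proportion of 'comparison
    looks larger' responses crosses 0.5, by linear interpolation between
    the two bracketing data points."""
    points = list(zip(sizes, p_larger))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the data")

# Hypothetical psychometric data: proportion of trials on which a neutral
# comparison was judged larger than an attended standard of size 1.0
sizes = [0.90, 0.95, 1.00, 1.05, 1.10]
p_larger = [0.10, 0.25, 0.40, 0.60, 0.85]
print(round(pse_by_interpolation(sizes, p_larger), 3))  # prints 1.025
```

A PSE above 1.0 means the neutral comparison had to be physically larger to look equal to the attended stimulus, i.e. attention inflated the attended stimulus's perceived size.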
Owning an Overweight or Underweight Body: Distinguishing the Physical, Experienced and Virtual Body
Piryankova, Ivelina V.; Wong, Hong Yu; Linkenauger, Sally A.; Stinson, Catherine; Longo, Matthew R.; Bülthoff, Heinrich H.; Mohler, Betty J.
2014-01-01
Our bodies are the most intimately familiar objects we encounter in our perceptual environment. Virtual reality provides a unique method to allow us to experience having a very different body from our own, thereby providing a valuable method to explore the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body. In order to gain a better understanding of the mechanisms underlying body ownership, we use an embodiment questionnaire, and introduce two new behavioral response measures: an affordance estimation task (indirect measure of body size) and a body size estimation task (direct measure of body size). Interestingly, after viewing the virtual body from first person perspective, both the affordance and the body size estimation tasks indicate a change in the perception of the size of the participant's experienced body. The change is biased by the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced and virtual bodies, by asking participants to provide affordance and body size estimations for each of the three bodies separately. This methodological point is important for virtual reality experiments investigating body ownership of a virtual body, because it offers a better understanding of which cues (e.g. visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues can vary between different setups. PMID:25083784
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual, and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived as shorter than the actual duration. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Because distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual, or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Visual contribution to the multistable perception of speech.
Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc
2007-11-01
The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.
Mechanisms of migraine aura revealed by functional MRI in human visual cortex
Hadjikhani, Nouchine; Sanchez del Rio, Margarita; Wu, Ona; Schwartz, Denis; Bakker, Dick; Fischl, Bruce; Kwong, Kenneth K.; Cutrer, F. Michael; Rosen, Bruce R.; Tootell, Roger B. H.; Sorensen, A. Gregory; Moskowitz, Michael A.
2001-01-01
Cortical spreading depression (CSD) has been suggested to underlie migraine visual aura. However, it has been challenging to test this hypothesis in human cerebral cortex. Using high-field functional MRI with near-continuous recording during visual aura in three subjects, we observed blood oxygenation level-dependent (BOLD) signal changes that demonstrated at least eight characteristics of CSD, time-locked to percept/onset of the aura. Initially, a focal increase in BOLD signal (possibly reflecting vasodilation), developed within extrastriate cortex (area V3A). This BOLD change progressed contiguously and slowly (3.5 ± 1.1 mm/min) over occipital cortex, congruent with the retinotopy of the visual percept. Following the same retinotopic progression, the BOLD signal then diminished (possibly reflecting vasoconstriction after the initial vasodilation), as did the BOLD response to visual activation. During periods with no visual stimulation, but while the subject was experiencing scintillations, BOLD signal followed the retinotopic progression of the visual percept. These data strongly suggest that an electrophysiological event such as CSD generates the aura in human visual cortex. PMID:11287655
Confinement has no effect on visual space perception: The results of the Mars-500 experiment.
Sikl, Radovan; Simeček, Michal
2014-02-01
People confined to a closed space live in a visual environment that differs from a natural open-space environment in several respects. The view is restricted to no more than a few meters, and nearby objects cannot be perceived relative to the position of a horizon. Thus, one might expect to find changes in visual space perception as a consequence of the prolonged experience of confinement. The subjects in our experimental study were participants of the Mars-500 project and spent nearly a year and a half isolated from the outside world during a simulated mission to Mars. The participants were presented with a battery of computer-based psychophysical tests examining their performance on various 3-D perception tasks, and we monitored changes in their perceptual performance throughout their confinement. Contrary to our expectations, no serious effect of the confinement on the crewmembers' 3-D perception was observed in any experiment. Several interpretations of these findings are discussed, including the possibilities that (1) the crewmembers' 3-D perception really did not change significantly, (2) changes in 3-D perception were manifested in the precision rather than the accuracy of perceptual judgments, and/or (3) the experimental conditions and the group sample were problematic.
Seeing Is the Hardest Thing to See: Using Illusions to Teach Visual Perception
ERIC Educational Resources Information Center
Riener, Cedar
2015-01-01
This chapter describes three examples of using illusions to teach visual perception. The illusions present ways for students to change their perspective regarding how their eyes work and also offer opportunities to question assumptions regarding their approach to knowledge.
Visual Equivalence and Amodal Completion in Cuttlefish
Lin, I-Rong; Chiao, Chuan-Chin
2017-01-01
Modern cephalopods are notably the most intelligent invertebrates, and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images were used, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat reduced-size versions and sketches of the training images as visually equivalent to the originals. Cuttlefish were also capable of recognizing partially occluded versions of the training image. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects even when visual information is partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075
Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition
Craddock, Matt; Lawson, Rebecca
2009-01-01
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
[Visual representation of natural scenes in flicker changes].
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2010-08-01
Coherence theory in scene perception (Rensink, 2002) assumes the retention of volatile object representations on which attention is not focused. On the other hand, visual memory theory in scene perception (Hollingworth & Henderson, 2002) assumes that robust object representations are retained. In this study, we hypothesized that the difference between these two theories is derived from the difference of the experimental tasks that they are based on. In order to verify this hypothesis, we examined the properties of visual representation by using a change detection and memory task in a flicker paradigm. We measured the representations when participants were instructed to search for a change in a scene, and compared them with the intentional memory representations. The visual representations were retained in visual long-term memory even in the flicker paradigm, and were as robust as the intentional memory representations. However, the results indicate that the representations are unavailable for explicitly localizing a scene change, but are available for answering the recognition test. This suggests that coherence theory and visual memory theory are compatible.
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level, pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays some role. PMID:25873869
Small numbers are sensed directly, high numbers constructed from size and density.
Zimmermann, Eckart
2018-04-01
Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that apparent numerosity is derived from an analysis of lower-level features such as the size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly, whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparably small, and the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the area over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Conscious visual memory with minimal attention.
Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F
2017-02-01
Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Perception-action dissociation generalizes to the size-inertia illusion.
Platkiewicz, Jonathan; Hayward, Vincent
2014-04-01
Two objects of similar visual aspect and of equal mass, but of different sizes, generally do not elicit the same percept of heaviness in humans. The larger object is consistently felt to be lighter than the smaller, an effect known as the "size-weight illusion." When asked to repeatedly lift the two objects, the grip forces were observed to adapt rapidly to the true object weight while the size-weight illusion persisted, a phenomenon interpreted as a dissociation between perception and action. We investigated whether the same phenomenon can be observed if the mass of an object is available to participants through inertial rather than gravitational cues and if the number and statistics of the stimuli are such that participants cannot remember each individual stimulus. We compared the responses of 10 participants in 2 experimental conditions, where they manipulated 33 objects having uncorrelated masses and sizes, supported by a frictionless, air-bearing slide that could be oriented vertically or horizontally. We also analyzed the participants' anticipatory motor behavior by measuring the grip force before motion onset. We found that the perceptual illusory effect was quantitatively the same in the two conditions and observed that both visual size and haptic mass had a negligible effect on the anticipatory gripping control of the participants in the gravitational and inertial conditions, despite the enormous differences in the mechanics of the two conditions and the large set of uncorrelated stimuli.
The Comparison of Visual Working Memory Representations with Perceptual Inputs
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew
2008-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
Illusion in reality: visual perception in displays
NASA Astrophysics Data System (ADS)
Kaufman, Lloyd; Kaufman, James H.
2001-06-01
Research into visual perception ultimately affects display design. Advances in display technology, in turn, affect our study of perception. Although this statement is too general to provoke controversy, this paper presents a real-life example that may prompt display engineers to make greater use of basic knowledge of visual perception, and encourage those who study perception to track leading-edge display technology more closely. Our real-life example deals with an ancient problem, the moon illusion: why does the horizon moon appear so large while the elevated moon looks so small? This was a puzzle for many centuries. Physical explanations, such as refraction by the atmosphere, are incorrect. The difference in apparent size may be classified as a misperception, so the answer must lie in the general principles of visual perception. The factors underlying the moon illusion must be the same factors that enable us to perceive the sizes of ordinary objects in visual space. Progress toward solving the problem has been irregular, since methods for actually measuring the illusion under a wide range of conditions were lacking. An advance in display technology made possible a serious and methodologically controlled study of the illusion. This technology was the first head-up display. In this paper we describe how the head-up display concept made it possible to test several competing theories of the moon illusion, and how it led to an explanation that stood for nearly 40 years. We also consider the criticisms of that explanation and how the optics of the head-up display also played a role in providing data for the critics. Finally, we describe our own advance on the original methodology. This advance was motivated by previously unrelated principles of space perception. We used a stereoscopic head-up display to test alternative hypotheses about the illusion and to discriminate between two classes of mutually contradictory theories.
At its core, the explanation for the moon illusion has implications for the design of virtual reality displays. How do we scale disparity at great distances to reflect depth between points at those distances? We conjecture that one yardstick involved in that scaling is provided by oculomotor cues operating at near distances. Without such a yardstick it is not possible to account for depth at long distances. As we shall explain, size and depth constancy should both fail in virtual reality displays where all of the visual information is optically in one plane. We suggest ways to study this problem, and also means by which displays may be designed to present information at different optical distances.
Some distinguishing characteristics of contour and texture phenomena in images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1992-01-01
The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. The visual perception of texture defined fine texture as a subclass which is interpreted as shading and is distinct from coarse figural similarity textures. Also, perception defined the smallest scale for contour/texture discrimination as eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful for this scale discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to local maximum.
Liu, Sheng; Angelaki, Dora E.
2009-01-01
Visual and vestibular signals converge onto the dorsal medial superior temporal area (MSTd) of the macaque extrastriate visual cortex, which is thought to be involved in multisensory heading perception for spatial navigation. Peripheral otolith information, however, is ambiguous and cannot distinguish linear accelerations experienced during self-motion from those due to changes in spatial orientation relative to gravity. Here we show that, unlike peripheral vestibular sensors but similar to lobules 9 and 10 of the cerebellar vermis (nodulus and uvula), MSTd neurons respond selectively to heading and not to changes in orientation relative to gravity. In support of a role in heading perception, MSTd vestibular responses are also dominated by velocity-like temporal dynamics, which might optimize sensory integration with visual motion information. Unlike the cerebellar vermis, however, MSTd neurons also carry a spatial orientation-independent rotation signal from the semicircular canals, which could be useful in compensating for the effects of head rotation on the processing of optic flow. These findings show that vestibular signals in MSTd are appropriately processed to support a functional role in multisensory heading perception. PMID:19605631
Hong, Yoon Hee; Lim, Tae-Sung; Yong, Suk Woo; Moon, So Young
2010-08-15
In cases of unilateral posterior cerebral artery (PCA) infarction, abnormal visual perception in the ipsilateral visual field, which is usually believed to be intact, is encountered infrequently and may confuse clinicians during evaluation. Recently, we observed two patients who presented with contralateral hemianopsia accompanied by ipsilateral visual illusions after acute unilateral PCA infarctions. Their visual illusions were characterized by zooming in, macropsia, or micropsia. These symptoms appeared to be related to deficits in size constancy. In both patients, the lesions involved the ipsilateral forceps major. The consistent presentation observed in these two patients suggests that dominance of size constancy can be located in the left hemisphere in some individuals. Copyright (c) 2010 Elsevier B.V. All rights reserved.
McClain, A D; van den Bos, W; Matheson, D; Desai, M; McClure, S M; Robinson, T N
2014-05-01
The Delboeuf illusion affects perceptions of the relative sizes of concentric shapes. This study was designed to extend research on the application of the Delboeuf illusion to food on a plate by testing whether a plate's rim width and coloring influence perceptual bias to affect perceived food portion size. Within-subjects experimental design. Experiment 1 tested the effect of rim width on perceived food portion size. Experiment 2 tested the effect of rim coloring on perceived food portion size. In both experiments, participants observed a series of photographic images of paired, side-by-side plates varying in design and amount of food. From each pair, participants were asked to select the plate that contained more food. Multilevel logistic regression examined the effects of rim width and coloring on perceived food portion size. Experiment 1: participants overestimated the diameter of food portions by 5% and the visual area of food portions by 10% on plates with wider rims compared with plates with very thin rims (P<0.0001). The effect of rim width was greater with larger food portion sizes. Experiment 2: participants overestimated the diameter of food portions by 1.5% and the visual area of food portions by 3% on plates with rim coloring compared with plates with no coloring (P=0.01). The effect of rim coloring was greater with smaller food portion sizes. The Delboeuf illusion applies to food on a plate. Participants overestimated food portion size on plates with wider and colored rims. These findings may help design plates to influence perceptions of food portion sizes.
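As a consistency check on the reported magnitudes (a sketch, not part of the original study): since the area of a circular portion scales with the square of its diameter, a 5% diameter overestimate implies roughly a 10% area overestimate, and 1.5% implies roughly 3%, matching the paired figures in the abstract.

```python
# Sanity check (not from the study): area grows with the square of diameter,
# so a fractional diameter overestimate d implies an area overestimate of
# (1 + d)^2 - 1, which is approximately 2d for small d.

def area_overestimate(diameter_overestimate: float) -> float:
    """Fractional area overestimate implied by a fractional diameter overestimate."""
    return (1.0 + diameter_overestimate) ** 2 - 1.0

print(round(area_overestimate(0.05), 4))   # 5% diameter -> about 10% area
print(round(area_overestimate(0.015), 4))  # 1.5% diameter -> about 3% area
```

The small quadratic term explains why the paired percentages in the abstract are almost, but not exactly, in a 1:2 ratio.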
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
A neural basis for the spatial suppression of visual motion perception
Liu, Liu D; Haefner, Ralf M; Pack, Christopher C
2016-01-01
In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.16167.001 PMID:27228283
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Parrish, Russell V.; Williams, Steven P.
1991-01-01
High-fidelity color pictorial displays that incorporate depth cues in the display elements are currently available. Depth cuing applied to advanced head-down flight display concepts potentially enhances the pilot's situational awareness and improves task performance. Depth cues provided by stereopsis exhibit constraints that must be fully understood so that depth-cuing enhancements can be adequately realized and exploited. A fundamental issue (the goal of this investigation) is whether the use of head-down stereoscopic displays in flight applications degrades the real-world depth perception of pilots using such displays. Stereoacuity tests are used in this study as the measure of interest. Eight pilots flew repeated simulated landing approaches using both nonstereo and stereo 3-D head-down pathway-in-the-sky displays. At the decision height of each approach (where the pilot changes to an out-the-window view to obtain real-world visual references), the pilots switched to a stereoacuity test that used real objects. Statistical analysis of the stereoacuity measures (data for a control condition of no exposure to any electronic flight display compared with data for changes from nonstereo and from stereo displays) reveals no significant differences for any of the conditions. Therefore, changing from short-term exposure to a head-down stereo display has no more effect on real-world relative depth perception than does changing from a nonstereo display. However, depth perception effects based on size and distance judgments and on long-term exposure remain issues to be investigated.
Higashiyama, A
1992-03-01
Three experiments investigated anisotropic perception of visual angle outdoors. In Experiment 1, scales for vertical and horizontal visual angles ranging from 20 degrees to 80 degrees were constructed with the method of angle production (in which the subject reproduced a visual angle with a protractor) and the method of distance production (in which the subject produced a visual angle by adjusting viewing distance). In Experiment 2, scales for vertical and horizontal visual angles of 5 degrees-30 degrees were constructed with the method of angle production and were compared with scales for orientation in the frontal plane. In Experiment 3, vertical and horizontal visual angles of 3 degrees-80 degrees were judged with the method of verbal estimation. The main results of the experiments were as follows: (1) The obtained angles for visual angle are described by a quadratic equation, theta' = a + b*theta + c*theta^2 (where theta is the visual angle; theta', the obtained angle; and a, b, and c are constants). (2) The linear coefficient b is larger than unity and is steeper for the vertical direction than for the horizontal direction. (3) The quadratic coefficient c is generally smaller than zero and is negatively larger for the vertical direction than for the horizontal direction. And (4) the obtained angle for visual angle is larger than that for orientation. From these results, it was possible to predict the horizontal-vertical illusion, over-constancy of size, and the moon illusion.
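The reported quadratic relation can be illustrated with a minimal sketch. The coefficient values below are hypothetical (the abstract gives only their qualitative properties: b > 1 and larger for vertical, c < 0 and more negative for vertical), chosen to show how judged angles exceed physical angles more for vertical than for horizontal extents.

```python
# Illustrative sketch of the quadratic scaling theta' = a + b*theta + c*theta^2.
# The coefficients are made up for illustration, not taken from the paper;
# they only respect the paper's qualitative constraints (b > 1, c < 0,
# both effects stronger for the vertical direction).

def obtained_angle(theta_deg: float, a: float, b: float, c: float) -> float:
    """Judged angle (degrees) for a physical visual angle theta_deg."""
    return a + b * theta_deg + c * theta_deg ** 2

vertical = dict(a=0.0, b=1.5, c=-0.004)     # hypothetical vertical coefficients
horizontal = dict(a=0.0, b=1.2, c=-0.002)   # hypothetical horizontal coefficients

for theta in (20, 40, 60, 80):
    v = obtained_angle(theta, **vertical)
    h = obtained_angle(theta, **horizontal)
    print(f"theta={theta:2d}  vertical={v:6.1f}  horizontal={h:6.1f}")
```

With any coefficients of this form, vertical judgments exceed horizontal ones across the tested range, which is the pattern the authors use to predict the horizontal-vertical illusion.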
Centanni, Tracy M; Norton, Elizabeth S; Park, Anne; Beach, Sara D; Halverson, Kelly; Ozernov-Palchik, Ola; Gaab, Nadine; Gabrieli, John DE
2018-03-05
A functional region of left fusiform gyrus termed "the visual word form area" (VWFA) develops during reading acquisition to respond more strongly to printed words than to other visual stimuli. Here, we examined responses to letters among 5- and 6-year-old early kindergarten children (N = 48) with little or no school-based reading instruction who varied in their reading ability. We used functional magnetic resonance imaging (fMRI) to measure responses to individual letters, false fonts, and faces in left and right fusiform gyri. We then evaluated whether signal change and size (spatial extent) of letter-sensitive cortex (greater activation for letters versus faces) and letter-specific cortex (greater activation for letters versus false fonts) in these regions related to (a) standardized measures of word-reading ability and (b) signal change and size of face-sensitive cortex (fusiform face area or FFA; greater activation for faces versus letters). Greater letter specificity, but not letter sensitivity, in left fusiform gyrus correlated positively with word reading scores. Across children, in the left fusiform gyrus, greater size of letter-sensitive cortex correlated with lesser size of FFA. These findings are the first to suggest that in beginning readers, development of letter responsivity in left fusiform cortex is associated with both better reading ability and also a reduction of the size of left FFA that may result in right-hemisphere dominance for face perception. © 2018 John Wiley & Sons Ltd.
Pyramid algorithms as models of human cognition
NASA Astrophysics Data System (ADS)
Pizlo, Zygmunt; Li, Zheng
2003-06-01
There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of these mechanisms is hierarchical clustering of information: visual images, spatial relations, and states, as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.
Selective Use of Optical Variables to Control Forward Speed
NASA Technical Reports Server (NTRS)
Johnson, Walter W.; Awe, Cynthia A.; Hart, Sandra G. (Technical Monitor)
1994-01-01
Previous work on the perception and control of simulated vehicle speed has examined the contributions of optical flow rate (angular visual speed) and texture, or edge rate (frequency of passing terrain objects or markings), to the perception and control of forward speed. However, these studies have not examined the ability to selectively use edge rate or flow rate. The two studies reported here show that subjects found it very difficult to arbitrarily direct attention to one or the other of these variables, but that the ability to selectively use these variables is linked to visual contextual information about the relative validity (linkage with speed) of the two variables. The selectivity also resulted in different velocity adaptation levels for events in which flow rate and edge rate specified forward speed. Finally, the role of visual context in directing attention was further buttressed by the finding that the incorrect perception of changes in ground texture density tended to be coupled with incorrect perceptions of changes in forward speed.
Lee, D H; Mehta, M D
2003-06-01
Effective risk communication in transfusion medicine is important for health-care consumers, but understanding the numerical magnitude of risks can be difficult. The objective of this study was to determine the effect of a visual risk communication tool on the knowledge and perception of transfusion risk. Laypeople were randomly assigned to receive transfusion risk information with either a written or a visual presentation format for communicating and comparing the probabilities of transfusion risks relative to other hazards. Knowledge of transfusion risk was ascertained with a multiple-choice quiz, and risk perception was ascertained by psychometric scaling and principal components analysis. Two hundred subjects were recruited and randomly assigned. Risk communication with both written and visual presentation formats increased knowledge of transfusion risk and decreased the perceived dread and severity of transfusion risk. Neither format changed the perceived knowledge and control of transfusion risk, nor the perceived benefit of transfusion. No differences in knowledge or risk perception outcomes were detected between the groups randomly assigned to written or visual presentation formats. Risk communication that incorporates risk comparisons in either written or visual presentation formats can improve knowledge and reduce the perception of transfusion risk in laypeople.
Gomez Baquero, David; Koppel, Kadri; Chambers, Delores; Hołda, Karolina; Głogowski, Robert; Chambers, Edgar
2018-05-23
Sensory analysis of pet foods has been emerging as an important field of study for the pet food industry over the last few decades. Few studies have been conducted on understanding pet owners' perception of pet foods. The objective of this study was to gain a deeper understanding of how dog owners in different consumer segments perceive the visual characteristics of dry dog foods. A total of 120 consumers evaluated the appearance of 30 dry dog food samples with varying visual characteristics. The consumers rated the acceptance of the samples and associated each one with a list of positive and negative beliefs. Cluster analysis, ANOVA, and correspondence analysis were used to analyze the consumer responses. The acceptability of the appearance of dry dog foods was affected by the number of different kibbles present and by the color(s), shape(s), and size(s) of the kibbles in the product. Three consumer clusters were identified. Consumers rated single-kibble samples of medium size, traditional shape, and brown color highest. Participants disliked extra-small or extra-large kibble sizes, shapes with high dimensional contrast, and kibbles of light brown color. These findings can help dry dog food manufacturers meet consumers' needs, with increasing benefits to the pet food and commodity industries.
The extreme relativity of perception: A new contextual effect modulates human resolving power.
Namdar, Gal; Ganel, Tzvi; Algom, Daniel
2016-04-01
The authors report the discovery of a new effect of context that modulates human resolving power with respect to an individual stimulus. They show that the size of the difference threshold or the just noticeable difference around a standard stimulus depends on the range of the other standards tested simultaneously for resolution within the same experimental session. The larger this range, the poorer the resolving power for a given standard. The authors term this effect the range of standards effect (RSE). They establish this result both in the visual domain for the perception of linear extent, and in the somatosensory domain for the perception of weight. They discuss the contingent nature of stimulus resolution in perception and psychophysics and contrast it with the immunity to contextual influences of visually guided action. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Effect of field of view and monocular viewing on angular size judgements in an outdoor scene
NASA Technical Reports Server (NTRS)
Denz, E. A.; Palmer, E. A.; Ellis, S. R.
1980-01-01
Observers typically overestimate the angular size of distant objects. Significantly, overestimations are greater in outdoor settings than in aircraft visual-scene simulators. The effect of field of view and monocular and binocular viewing conditions on angular size estimation in an outdoor field was examined. Subjects adjusted the size of a variable triangle to match the angular size of a standard triangle set at three greater distances. Goggles were used to vary the field of view from 11.5 deg to 90 deg for both monocular and binocular viewing. In addition, an unrestricted monocular and binocular viewing condition was used. It is concluded that neither restricted fields of view similar to those present in visual simulators nor the restriction of monocular viewing causes a significant loss in depth perception in outdoor settings. Thus, neither factor should significantly affect the depth realism of visual simulators.
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
Faivre, Nathan; Dönz, Jonathan; Scandola, Michele; Dhanis, Herberto; Bello Ruiz, Javier; Bernasconi, Fosco; Salomon, Roy; Blanke, Olaf
2017-01-04
Vision is known to be shaped by context, defined by environmental and bodily signals. In the Taylor illusion, the size of an afterimage projected on one's hand changes according to proprioceptive signals conveying hand position. Here, we assessed whether the Taylor illusion does not just depend on the physical hand position, but also on bodily self-consciousness as quantified through illusory hand ownership. Relying on the somatic rubber hand illusion, we manipulated hand ownership, such that participants embodied a rubber hand placed next to their own hand. We found that an afterimage projected on the participant's hand drifted depending on illusory ownership between the participants' two hands, showing an implication of self-representation during the Taylor illusion. Oscillatory power analysis of electroencephalographic signals showed that illusory hand ownership was stronger in participants with stronger α suppression over left sensorimotor cortex, whereas the Taylor illusion correlated with higher β/γ power over frontotemporal regions. Higher γ connectivity between left sensorimotor and inferior parietal cortex was also found during illusory hand ownership. These data show that afterimage drifts in the Taylor illusion do not only depend on the physical hand position but also on subjective ownership, which itself is based on the synchrony of somatosensory signals from the two hands. The effect of ownership on afterimage drifts is associated with β/γ power and γ connectivity between frontoparietal regions and the visual cortex. Together, our results suggest that visual percepts are not only influenced by bodily context but are self-grounded, mapped on a self-referential frame. Vision is influenced by the body: in the Taylor illusion, the size of an afterimage projected on one's hand changes according to tactile and proprioceptive signals conveying hand position. 
Here, we report a new phenomenon revealing that the perception of afterimages depends not only on bodily signals, but also on the sense of self. Relying on the rubber hand illusion, we manipulated hand ownership, so that participants embodied a rubber hand placed next to their own hand. We found that visual afterimages projected on the participant's hand drifted laterally only when the rubber hand was embodied. Electroencephalography revealed spectral dissociations between somatic and visual effects, and higher γ connectivity along the dorsal visual pathways when the rubber hand was embodied. Copyright © 2017 the authors.
Role of a texture gradient in the perception of relative size.
Tozawa, Junko
2010-01-01
Two theories regarding the role of a texture gradient in the perception of the relative size of objects are compared. Relational theory states that relative size is directly specified by the projective ratio of the numbers of texture elements spanned by objects. Distance calibration theory assumes that relative size is a product of visual angle and distance, once the distance is specified by the texture. Experiment 1 involved three variables: background (no texture, texture gradient patterns), the ratio of heights of the comparison stimulus to a standard (three levels), and angular vertical separation of the standard stimulus below the horizon (two levels). The effect of the retinal length of the comparison stimulus was examined in experiment 2. In both experiments, participants judged both the apparent size and distance of a comparison stimulus relative to a standard stimulus. Results suggest that the cues selected by observers to judge relative size were to some degree different from those used to judge relative distance. Relative size was strongly affected by a texture gradient and the retinal length of a comparison stimulus whereas relative distance perception was affected by relative height. When dominant cues that specify size are different from those which specify distance, relational theory might provide a better account of relative size perception than distance calibration theory.
NASA Astrophysics Data System (ADS)
Yamamoto, Shoji; Hosokawa, Natsumi; Yokoya, Mayu; Tsumura, Norimichi
2016-12-01
In this paper, we investigated the consistency of visual perception for the change of reflection images in an augmented reality setting. Reflection images with distortion and magnification were generated by changing the capture position of the environment map. Observers evaluated the distortion and magnification in reflection images where the reflected objects were arranged symmetrically or asymmetrically. Our results confirmed that the observers' visual perception was more sensitive to changes in distortion than in magnification in the reflection images. Moreover, the asymmetrical arrangement of reflected objects effectively expands the acceptable range of distortion compared with the symmetrical arrangement.
Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L
2017-05-01
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Depth reversals in stereoscopic displays driven by apparent size
NASA Astrophysics Data System (ADS)
Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.
1998-04-01
In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.
McClain, Arianna; van den Bos, Wouter; Matheson, Donna; Desai, Manisha; McClure, Samuel M.; Robinson, Thomas N.
2013-01-01
OBJECTIVE The Delboeuf Illusion affects perceptions of the relative sizes of concentric shapes. This study was designed to extend research on the application of the Delboeuf illusion to food on a plate by testing whether a plate’s rim width and coloring influence perceptual bias to affect perceived food portion size. DESIGN AND METHODS Within-subjects experimental design. Experiment 1 tested the effect of rim width on perceived food portion size. Experiment 2 tested the effect of rim coloring on perceived food portion size. In both experiments, participants observed a series of photographic images of paired, side-by-side plates varying in designs and amounts of food. From each pair, participants were asked to select the plate that contained more food. Multi-level logistic regression examined the effects of rim width and coloring on perceived food portion size. RESULTS Experiment 1: Participants overestimated the diameter of food portions by 5% and the visual area of food portions by 10% on plates with wider rims compared to plates with very thin rims (P<0.0001). The effect of rim width was greater with larger food portion sizes. Experiment 2: Participants overestimated the diameter of food portions by 1.5% and the visual area of food portions by 3% on plates with rim coloring compared to plates with no coloring (P=0.01). The effect of rim coloring was greater with smaller food portion sizes. CONCLUSION The Delboeuf illusion applies to food on a plate. Participants overestimated food portion size on plates with wider and colored rims. These findings may help design plates to influence perceptions of food portion sizes. PMID:24005858
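The paired diameter and area figures in Experiment 1 (5% vs. 10%) and Experiment 2 (1.5% vs. 3%) are mutually consistent, since for a roughly circular portion the visual area scales with the square of the diameter. A quick check (illustrative only, not the authors' analysis):

```python
def implied_area_overestimate_pct(diameter_pct):
    """Area overestimate implied by a diameter overestimate,
    assuming a circular food portion (area scales with diameter squared)."""
    scale = 1.0 + diameter_pct / 100.0
    return (scale ** 2 - 1.0) * 100.0

print(round(implied_area_overestimate_pct(5.0), 2))  # ~10% area, as reported
print(round(implied_area_overestimate_pct(1.5), 2))  # ~3% area, as reported
```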
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
Taking a Hands-On Approach: Apparent Grasping Ability Scales the Perception of Object Size
ERIC Educational Resources Information Center
Linkenauger, Sally A.; Witt, Jessica K.; Proffitt, Dennis R.
2011-01-01
We examined whether the apparent size of an object is scaled to the morphology of the relevant body part with which one intends to act on it. To be specific, we tested if the visually perceived size of graspable objects is scaled to the extent of apparent grasping ability for the individual. Previous research has shown that right-handed…
Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo
2015-05-01
The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although the brain network is supposed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework based on hierarchical clustering (single-linkage distance) to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audiovisual speech cue) or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception by congruent audiovisual stimuli, tighter couplings of the left anterior temporal gyrus-anterior insula component and the right premotor-visual component were observed than in the auditory or visual speech cue conditions, respectively. Interestingly, visual speech perceived under white noise showed tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
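The persistent homological framework described above tracks, at every possible threshold, which brain regions merge into a single connected component. A minimal single-linkage sketch over a toy distance matrix (the 4-region matrix and its values are hypothetical stand-ins for the study's functional connectivity data):

```python
def connected_components_at(dist, threshold):
    """Number of connected components when any two nodes (brain regions)
    with distance <= threshold are linked (single-linkage filtration)."""
    n = len(dist)
    parent = list(range(n))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] <= threshold:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Toy 4-region matrix: two tightly coupled pairs, weakly linked to each other
d = [[0.0, 0.1, 0.9, 0.9],
     [0.1, 0.0, 0.9, 0.9],
     [0.9, 0.9, 0.0, 0.1],
     [0.9, 0.9, 0.1, 0.0]]
print(connected_components_at(d, 0.2))  # 2: each tight pair has merged
print(connected_components_at(d, 1.0))  # 1: everything is connected
```

Sweeping the threshold from 0 upward and recording when components merge is exactly the filtration step; "tighter coupling" corresponds to regions merging at lower thresholds.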
Gestalt-like constraints produce veridical (Euclidean) percepts of 3D indoor scenes
Kwon, TaeKyu; Li, Yunfeng; Sawada, Tadamasa; Pizlo, Zygmunt
2015-01-01
This study, strongly influenced by Gestalt ideas, extends our prior work on the role of a priori constraints in the veridical perception of 3D shapes to the perception of 3D scenes. Our experiments tested how human subjects perceive the layout of a naturally-illuminated indoor scene that contains common symmetrical 3D objects standing on a horizontal floor. In one task, the subject was asked to draw a top view of a scene that was viewed either monocularly or binocularly. The top views the subjects reconstructed were configured accurately except for their overall size. These size errors varied from trial to trial, and were most likely the result of a response bias. There was little, if any, evidence of systematic distortions of the subjects' perceived visual space, the kind of distortions that have been reported in numerous experiments run under very unnatural conditions. Having shown this, we proceeded to use Foley's (Vision Research 12 (1972) 323–332) isosceles right triangle experiment to test the intrinsic geometry of visual space directly. This was done with natural viewing, with the impoverished viewing conditions Foley had used, as well as with a number of intermediate viewing conditions. Our subjects produced very accurate triangles when the viewing conditions were natural, but their performance deteriorated systematically as the viewing conditions were progressively impoverished. Their perception of visual space became more compressed as their natural visual environment was degraded. Once this was shown, we developed a computational model that emulated the most salient features of our psychophysical results. We concluded that human observers see 3D scenes veridically when they view natural 3D objects within natural 3D environments. PMID:26525845
The economics of motion perception and invariants of visual sensitivity.
Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael
2007-06-21
Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.
ERIC Educational Resources Information Center
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.
2013-01-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…
A homogeneous field for light adaptation.
DOT National Transportation Integrated Search
1966-09-01
Visual judgments of size, distance, slant, etc. in the flying situation are often made under reduced cue conditions, especially during night flying. In the experimental study of spatial perception under these conditions, experiments often require lon...
Spering, Miriam; Carrasco, Marisa
2012-01-01
Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238
Differential effects of delay upon visually and haptically guided grasping and perceptual judgments.
Pettypiece, Charles E; Culham, Jody C; Goodale, Melvyn A
2009-05-01
Experiments with visual illusions have revealed a dissociation between the systems that mediate object perception and those responsible for object-directed action. More recently, an experiment on a haptic version of the visual size-contrast illusion has provided evidence for the notion that the haptic modality shows a similar dissociation when grasping and estimating the size of objects in real-time. Here we present evidence suggesting that the similarities between the two modalities begin to break down once a delay is introduced between when people feel the target object and when they perform the grasp or estimation. In particular, when grasping after a delay in a haptic paradigm, people scale their grasps differently when the target is presented with a flanking object of a different size (although the difference does not reflect a size-contrast effect). When estimating after a delay, however, it appears that people ignore the size of the flanking objects entirely. This does not fit well with the results commonly found in visual experiments. Thus, introducing a delay reveals important differences in the way in which haptic and visual memories are stored and accessed.
Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.
2017-01-01
Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
Change Blindness Phenomena for Virtual Reality Display Systems.
Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete
2011-09-01
In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., a passive and active stereoscopic projection system, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.
Relative size perception at a distance is best at eye level
NASA Technical Reports Server (NTRS)
Bertamini, M.; Yang, T. L.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
1998-01-01
Relative size judgments were collected for two objects at 30.5 m and 23.8 m from the observer in order to assess how performance depends on the relationship between the size of the objects and the eye level of the observer. In three experiments in an indoor hallway and in one experiment outdoors, accuracy was higher for objects in the neighborhood of eye level. We consider these results in the light of two hypotheses. One proposes that observers localize the horizon as a reference for judging relative size, and the other proposes that observers perceive the general neighborhood of the horizon and then employ a height-in-visual-field heuristic. The finding that relative size judgments are best around the horizon implies that information that is independent of distance perception is used in perceiving size.
Saccadic Corollary Discharge Underlies Stable Visual Perception
Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.
2016-01-01
Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the Old World monkey, such a CD circuit for saccades has been identified extending from the superior colliculus through the mediodorsal (MD) thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable.
A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647
Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations
NASA Technical Reports Server (NTRS)
Mitchell, David G.; Hart, Daniel C.
1993-01-01
The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delays and motion command washout filters. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities rather than changes in the aircraft model itself. The results indicate a need for tailoring the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than to eliminate the visual delay entirely through lead compensation.
Fautrelle, L; Barbieri, G; Ballay, Y; Bonnetblanc, F
2011-10-27
The time required to complete a fast and accurate movement is a function of its amplitude and the target size. This phenomenon refers to the well-known speed-accuracy trade-off. Some interpretations have suggested that the speed-accuracy trade-off is already integrated into the movement planning phase. More specifically, pointing movements may be planned to minimize the variance of the final hand position. However, goal-directed movements can be altered at any time if, for instance, the target location is changed during execution. Thus, one possible limitation of these interpretations may be that they underestimate feedback processes. To further investigate this hypothesis, we designed an experiment in which the speed-accuracy trade-off was unexpectedly varied at hand movement onset by modifying separately the target distance or size, or by modifying both of them simultaneously. These pointing movements were executed from an upright standing position. Our main results showed that the movement time increased when there was a change to the size or location of the target. In addition, the terminal variability of finger position did not change. In other words, the movement velocity is modulated according to the target size and distance during motor programming or during the final approach, independently of the final variability of the hand position. This suggests that when the speed-accuracy trade-off is unexpectedly modified, terminal feedback based on intermediate representations of the endpoint velocity is used to monitor and control the hand displacement. There is clearly no obvious perception-action coupling in this case, but rather intermediate processing may be involved. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
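The amplitude-and-target-size relation the abstract opens with is usually formalized as Fitts' law, MT = a + b·log2(2A/W). A sketch with illustrative coefficients (the a and b values here are placeholders, not fitted to this study's data):

```python
import math

def fitts_movement_time(amplitude, width, a=0.1, b=0.15):
    """Fitts' law: movement time grows linearly with the index of
    difficulty ID = log2(2A / W). a and b are illustrative constants."""
    return a + b * math.log2(2.0 * amplitude / width)

# Halving the target width raises ID by exactly one bit, adding b seconds
mt_wide = fitts_movement_time(200.0, 20.0)    # ID = log2(20)
mt_narrow = fitts_movement_time(200.0, 10.0)  # ID = log2(40)
print(round(mt_narrow - mt_wide, 3))  # 0.15
```

Changing the target's distance or size mid-movement, as in this experiment, amounts to changing the index of difficulty after planning is complete, which is why the authors can separate planning from feedback-based control.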
Developmental Changes in the Visual Span for Reading
Kwon, MiYoung; Legge, Gordon E.; Dubbels, Brock R.
2007-01-01
The visual span for reading refers to the range of letters, formatted as in text, that can be recognized reliably without moving the eyes. It is likely that the size of the visual span is determined primarily by characteristics of early visual processing. It has been hypothesized that the size of the visual span imposes a fundamental limit on reading speed (Legge, Mansfield, & Chung, 2001). The goal of the present study was to investigate developmental changes in the size of the visual span in school-age children, and the potential impact of these changes on children’s reading speed. The study design included groups of 10 children in 3rd, 5th, and 7th grade, and 10 adults. Visual span profiles were measured by asking participants to recognize letters in trigrams (random strings of three letters) flashed for 100 ms at varying letter positions left and right of the fixation point. Two print sizes (0.25° and 1.0°) were used. Over a block of trials, a profile was built up showing letter recognition accuracy (% correct) versus letter position. The area under this profile was defined to be the size of the visual span. Reading speed was measured in two ways: with Rapid Serial Visual Presentation (RSVP) and with short blocks of text (termed Flashcard presentation). Consistent with our prediction, we found that the size of the visual span increased linearly with grade level and it was significantly correlated with reading speed for both presentation methods. Regression analysis using the size of the visual span as a predictor indicated that 34% to 52% of variability in reading speeds can be accounted for by the size of the visual span. These findings are consistent with a significant role of early visual processing in the development of reading skills. PMID:17845810
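The "area under the profile" measure described above can be sketched as a small computation. The accuracy values below are hypothetical, for illustration only, and the simple percent-area sum stands in for the study's actual measure.

```python
# Sketch of the visual span measure: letter-recognition accuracy
# (% correct) at letter positions left (-) and right (+) of fixation.
# The size of the visual span is taken as the area under this profile.
# Accuracy values are hypothetical, not data from the study.

profile = {
    -5: 40, -4: 55, -3: 75, -2: 90, -1: 98,
     0: 99,
     1: 98,  2: 92,  3: 80,  4: 60,  5: 45,
}

def visual_span_size(profile):
    """Area under the accuracy profile (percent x letter positions),
    summed over unit-width letter slots."""
    return sum(profile.values())

print(visual_span_size(profile))  # larger area = larger visual span
```

A wider or higher profile, as the study found for higher grade levels, yields a larger area, which in turn predicted faster reading speeds.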
Challinor, Kirsten L; Mond, Jonathan; Stephen, Ian D; Mitchison, Deborah; Stevenson, Richard J; Hay, Phillipa; Brooks, Kevin R
2017-12-01
Although body size and shape misperception (BSSM) is a common feature of anorexia nervosa, bulimia nervosa and muscle dysmorphia, little is known about its underlying neural mechanisms. Recently, a new approach has emerged, based on the long-established non-invasive technique of perceptual adaptation, which allows for inferences about the structure of the neural apparatus responsible for alterations in visual appearance. Here, we describe several recent experimental examples of BSSM, wherein exposure to "extreme" body stimuli causes visual aftereffects of biased perception. The implications of these studies for our understanding of the neural and cognitive representation of human bodies, along with their implications for clinical practice, are discussed.
Corporate visual identity: a case in hospitals.
Alkibay, Sanem; Ozdogan, F Bahar; Ermec, Aysegul
2007-01-01
This paper aims to present a perspective for better understanding corporate identity by examining the perceptions of Turkish patients, and to develop a corporate visual identity scale. Although there has been no research on the corporate identity of hospitals in Turkey, a developing country, understanding consumers' perceptions of hospitals' corporate identity efforts could provide different perspectives for recruiters. When hospitals are considered as two different groups, university and state hospitals, the priority of corporate visual identity characteristics may change, whereas the top five characteristics remain the same for all hospitals.
Kato, Masaki; Yokoyama, Chihiro; Kawasaki, Akihiro; Takeda, Chiho; Koike, Taku; Onoe, Hirotaka; Iriki, Atsushi
2018-05-01
As with humans, vocal communication is an important social tool for nonhuman primates. Common marmosets (Callithrix jacchus) often produce whistle-like 'phee' calls when they are visually separated from conspecifics. The neural processes specific to phee call perception, however, are largely unknown, despite the possibility that these processes involve social information. Here, we examined behavioral and whole-brain mapping evidence regarding the detection of individual conspecific phee calls using an audio playback procedure. Phee calls evoked sound exploratory responses when the caller changed, indicating that marmosets can discriminate between caller identities. Positron emission tomography with [18F]fluorodeoxyglucose revealed that perception of phee calls from a single subject was associated with activity in the dorsolateral prefrontal, medial prefrontal, and orbitofrontal cortices, and the amygdala. These findings suggest that these regions are implicated in cognitive and affective processing of salient social information. However, phee calls from multiple subjects induced brain activation in only some of these regions, such as the dorsolateral prefrontal cortex. We also found distinctive brain deactivation and functional connectivity associated with phee call perception depending on the caller change. According to changes in pupil size, phee calls from a single subject induced a higher arousal level than those from multiple subjects. These results suggest that marmoset phee calls convey information about individual identity and affective valence depending on the consistency or variability of the caller. Given this flexible, identity-based perception of the call, humans and marmosets may share some neural mechanisms underlying conspecific vocal perception.
Acoustic-tactile rendering of visual information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.
2012-03-01
In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
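The two proximity renderings compared above (intensity vs. tempo) can be sketched as simple mappings. All parameter values here are illustrative assumptions, not the system's actual settings.

```python
import math

def intensity_gain_db(distance, ref_distance=1.0):
    """Proximity via loudness: attenuate by about 6 dB per doubling of
    distance (inverse-square law), relative to a reference distance."""
    return -20.0 * math.log10(distance / ref_distance)

def tempo_bpm(distance, near_bpm=240.0, far_bpm=60.0, max_distance=10.0):
    """Proximity via tempo: clicks speed up as the scanning finger
    approaches the object (endpoint tempos are hypothetical)."""
    d = min(max(distance, 0.0), max_distance)
    return near_bpm - (near_bpm - far_bpm) * d / max_distance

# Doubling the distance costs about 6 dB under the intensity cue,
# while the tempo cue falls off linearly with distance.
print(round(intensity_gain_db(2.0), 2), tempo_bpm(5.0))
```

The subjective results reported above favor the intensity mapping as the more effective proximity cue.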
Helicopter pilot estimation of self-altitude in a degraded visual environment
NASA Astrophysics Data System (ADS)
Crowley, John S.; Haworth, Loran A.; Szoboszlay, Zoltan P.; Lee, Alan G.
2000-06-01
The effect of night vision devices and degraded visual imagery on self-altitude perception is unknown. Thirteen Army aviators with normal vision flew five flights under various visual conditions in a modified AH-1 (Cobra) helicopter. Subjects estimated their altitude or flew to specified altitudes while performing a series of maneuvers. The results showed that subjects were better at detecting and controlling changes in altitude than at flying to or naming a specific altitude. In cruise flight and descent, the subjects tended to fly above the desired altitude, an error in the safe direction. While hovering, the direction of error was less predictable. In the low-level cruise flight scenario tested in this study, altitude perception was affected more by changes in image resolution than by changes in field of view (FOV) or ocularity.
Marino, Alexandria C.; Mazer, James A.
2016-01-01
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820
Correlation Between Measured Noise And Its Visual Perception.
NASA Astrophysics Data System (ADS)
Bollen, Romain
1986-06-01
People in the field claim, for understandable reasons, that measured data do not agree with what they perceive. Scientists reply that their data are "true". Are they? Since images are made to be looked at, a request for data that are meaningful for what is perceived is not foolish. We show that, when noise is characterized by standard density-fluctuation figures, a good correlation with noise perception by the naked eye on a large-size radiograph is obtained by applying microdensitometric scanning with a 400-micron aperture. For other viewing conditions, the aperture size has to be adapted.
Contextual effects on perceived contrast: figure-ground assignment and orientation contrast.
Self, Matthew W; Mookhoek, Aart; Tjalma, Nienke; Roelfsema, Pieter R
2015-02-02
Figure-ground segregation is an important step in the path leading to object recognition. The visual system segregates objects ('figures') in the visual scene from their backgrounds ('ground'). Electrophysiological studies in awake-behaving monkeys have demonstrated that neurons in early visual areas increase their firing rate when responding to a figure compared to responding to the background. We hypothesized that similar changes in neural firing would take place in early visual areas of the human visual system, leading to changes in the perception of low-level visual features. In this study, we investigated whether contrast perception is affected by figure-ground assignment using stimuli similar to those in the electrophysiological studies in monkeys. We measured contrast discrimination thresholds and perceived contrast for Gabor probes placed on figures or the background and found that the perceived contrast of the probe was increased when it was placed on a figure. Furthermore, we tested how this effect compared with the well-known effect of orientation contrast on perceived contrast. We found that figure-ground assignment and orientation contrast produced changes in perceived contrast of a similar magnitude, and that they interacted. Our results demonstrate that figure-ground assignment influences perceived contrast, consistent with an effect of figure-ground assignment on activity in early visual areas of the human visual system. © 2015 ARVO.
Material and shape perception based on two types of intensity gradient information
Nishida, Shin'ya
2018-01-01
Visual estimation of the material and shape of an object from a single image poses a hard, ill-posed computational problem. However, in our daily lives we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to separately estimate the contributions of material and shape. Specifically, material perception relies mainly on intensity gradient magnitude information, while shape perception relies mainly on intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicates that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized to discriminate albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties of the distal world. PMID:29702644
Hartmann, Christina; Siegrist, Michael
2015-06-01
The present study investigated the longitudinal development of body size perception in relation to different personality traits. A sample of Swiss adults (N=2905, 47% men), randomly selected from the telephone book, completed a questionnaire in two consecutive years (2012, 2013). Body size perception was assessed with the Contour Drawing Rating Scale, and personality traits were assessed with a short version of the Big Five Inventory. Longitudinal analysis of change indicated that men and women scoring higher on conscientiousness perceived themselves as thinner one year later. In contrast, women scoring higher on neuroticism perceived their body size as larger one year later. No significant effect was observed for men scoring higher on neuroticism. These results were independent of weight changes, body mass index, age, and education. Our findings suggest that personality traits contribute to body size perception among adults. Copyright © 2015 Elsevier Ltd. All rights reserved.
Visuoperceptual impairment in dementia with Lewy bodies.
Mori, E; Shimomura, T; Fujimori, M; Hirono, N; Imamura, T; Hashimoto, M; Tanimukai, S; Kazui, H; Hanihara, T
2000-04-01
In dementia with Lewy bodies (DLB), vision-related cognitive and behavioral symptoms are common, and involvement of the occipital visual cortices has been demonstrated in functional neuroimaging studies. To delineate visuoperceptual disturbance in patients with DLB in comparison with that in patients with Alzheimer disease, and to explore the relationship between visuoperceptual disturbance and vision-related cognitive and behavioral symptoms. Case-control study. Research-oriented hospital. Twenty-four patients with probable DLB (based on criteria of the Consortium on DLB International Workshop) and 48 patients with probable Alzheimer disease (based on criteria of the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association) who were matched to those with DLB 2:1 by age, sex, education, and Mini-Mental State Examination score. Four test items to examine visuoperceptual functions: the object size discrimination, form discrimination, overlapping figure identification, and visual counting tasks. Compared with patients with probable Alzheimer disease, patients with probable DLB scored significantly lower on all the visuoperceptual tasks (P<.04 to P<.001). In the DLB group, patients with visual hallucinations (n = 18) scored significantly lower on overlapping figure identification (P = .01) than those without them (n = 6), and patients with television misidentifications (n = 5) scored significantly lower on size discrimination (P<.001), form discrimination (P = .01), and visual counting (P = .007) than those without them (n = 19). Visual perception is defective in probable DLB. The defective visual perception plays a role in the development of visual hallucinations, delusional misidentifications, visual agnosias, and the visuoconstructive disability characteristic of DLB.
Szenczi-Cseh, J; Horváth, Zs; Ambrus, Á
2017-12-01
We tested the applicability of the EPIC-SOFT food picture series in the context of a Hungarian food consumption survey gathering data for exposure assessment, and investigated errors in food portion estimation resulting from visual perception and conceptualisation-memory. Sixty-two participants in three age groups (10 to <74 years) were presented with three different portion sizes of five foods. The results were considered acceptable if the relative difference between the average estimated and actual weight obtained through the perception method was ≤25%, and the relative standard deviation of the individual weight estimates was <30%, after compensating for the effect of potential outliers with winsorisation. Picture series for all five food items were rated acceptable. Small portion sizes tended to be overestimated, and large ones tended to be underestimated. Portions of boiled potato were consistently overestimated, and portions of creamed spinach were consistently underestimated. Recalling the portion sizes resulted in overestimation, with larger differences (up to 60.7%).
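The acceptance criteria described above can be sketched as follows. The portion weights are invented for illustration, and a simple percentile-clamping winsorisation is assumed, since the study's exact winsorising rule is not given here.

```python
import statistics

def winsorise(values, lower_pct=0.05, upper_pct=0.95):
    """Clamp extreme estimates to the values at the chosen percentile
    ranks (one simple winsorising rule, assumed for this sketch)."""
    s = sorted(values)
    n = len(s)
    lo = s[int(lower_pct * (n - 1))]
    hi = s[int(upper_pct * (n - 1))]
    return [min(max(v, lo), hi) for v in values]

def acceptable(estimates_g, actual_g):
    """Apply the two criteria from the text: relative difference of the
    mean estimate from the actual weight <= 25%, and relative standard
    deviation < 30%, both computed after winsorisation."""
    w = winsorise(estimates_g)
    mean = statistics.mean(w)
    rel_diff = abs(mean - actual_g) / actual_g * 100.0
    rsd = statistics.stdev(w) / mean * 100.0
    return rel_diff <= 25.0 and rsd < 30.0

# Hypothetical estimates (g) of a 150 g portion, with one outlier:
estimates = [120, 140, 150, 155, 160, 165, 170, 180, 145, 300]
print(acceptable(estimates, 150))  # True: criteria met after winsorisation
```

Winsorising tames the single 300 g outlier before the mean and relative standard deviation are computed, which is the role the compensation step plays in the text.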
Perceptual asymmetry in texture perception.
Williams, D; Julesz, B
1992-07-15
A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.
Assessment of body perception among Swedish adolescents and young adults.
Bergström, E; Stenlund, H; Svedjehäll, B
2000-01-01
To assess body perception in adolescents and young adults without anorexia nervosa. Using a visual size estimation technique, perceived body size was estimated in four groups of Swedish adolescents and young adults without anorexia nervosa (86 males and 95 females). Perceived body size was estimated at nine different body sites and compared to real body size. The results show that 95% of males and 96% of females overestimated their body size (mean overestimation: males +22%, females +33%); overestimations were greater among females. The greatest overestimations were of the waist (males +31%, females +46%), buttocks (males +22%, females +42%), and thighs (males +27%, females +41%). The results indicate that overestimation of body size may be a general phenomenon among adolescents and young adults in a country such as Sweden, implying a distortion of body image similar to, but less pronounced than, that seen in individuals with anorexia nervosa.
Differential patterns of 2D location versus depth decoding along the visual hierarchy.
Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D
2017-02-15
Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. Copyright © 2016 Elsevier Inc. All rights reserved.
Perceptual context and individual differences in the language proficiency of preschool children.
Banai, Karen; Yifat, Rachel
2016-02-01
Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.
Jastrzebski, Mikolaj; Bala, Aleksandra
2013-01-01
Psilocybin is a substance of natural origin occurring in hallucinogenic mushrooms (most commonly in the genus Psilocybe). After its synthesis in 1958, research began on its psychoactive properties, particularly its strong effects on visual perception and spatial orientation. Given the very broad spectrum of psilocybin's effects, research also began on its different ranges of action, including its effect on physiological processes such as saccadic eye movements. Neuroimaging and neurophysiological research (positron emission tomography, PET, and electroencephalography, EEG) indicates changes in the rate of brain metabolism and a desynchronization of the cerebral hemispheres. Experimental studies show psilocybin-induced changes in visual perception and distortions in the handwriting of the patients examined. Subjective experiences reported by the subjects are widely described. There are also efforts to administer questionnaires to people under the influence of psilocybin, in the context of the similarity of the psilocybin-induced state to the initial stages of schizophrenia, as well as research aimed at creating an 'artificial' model of the disease.
Using spatial metrics to predict scenic perception in a changing landscape: Dennis, Massachusetts
James F. Palmer
2004-01-01
This paper investigates residents' perceptions of scenic quality in the Cape Cod community of Dennis, Massachusetts during a period of significant landscape change. In the mid-1970s, Chandler [Natural and Visual Resources, Dennis, Massachusetts. Dennis Conservation Commission and Planning Board, Dennis, MA, 1976] worked with a community group to evaluate the...
The artist emerges: visual art learning alters neural structure and function.
Schlegel, Alexander; Alexander, Prescott; Fogelson, Sergey V; Li, Xueting; Lu, Zhengang; Kohler, Peter J; Riley, Enrico; Tse, Peter U; Meng, Ming
2015-01-15
How does the brain mediate visual artistic creativity? Here we studied behavioral and neural changes in drawing and painting students compared to students who did not study art. We investigated three aspects of cognition vital to many visual artists: creative cognition, perception, and perception-to-action. We found that the art students became more creative via the reorganization of prefrontal white matter but did not find any significant changes in perceptual ability or related neural activity in the art students relative to the control group. Moreover, the art students improved in their ability to sketch human figures from observation, and multivariate patterns of cortical and cerebellar activity evoked by this drawing task became increasingly separable between art and non-art students. Our findings suggest that the emergence of visual artistic skills is supported by plasticity in neural pathways that enable creative cognition and mediate perceptuomotor integration. Copyright © 2014 Elsevier Inc. All rights reserved.
Ageing vision and falls: a review.
Saftari, Liana Nafisa; Kwon, Oh-Sang
2018-04-23
Falls are the leading cause of accidental injury and death among older adults. One in three adults over the age of 65 years falls annually. As the elderly population grows, falls become a major public health concern, and there is a pressing need to understand the causes of falls thoroughly. While it is well documented that visual functions such as visual acuity, contrast sensitivity, and stereo acuity are correlated with fall risk, little attention has been paid to the relationship between falls and the ability of the visual system to perceive motion in the environment. The omission of visual motion perception from the literature is a critical gap because it is an essential function in maintaining balance. In the present article, we first review existing studies regarding visual risk factors for falls and the effect of ageing vision on falls. We then present a group of phenomena, such as vection and sensory reweighting, that provide information on how visual motion signals are used to maintain balance. We suggest that the current list of visual risk factors for falls should be elaborated by taking into account the relationship between visual motion perception and balance control.
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
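The linear eccentricity dependence of Dmax described above can be sketched as a one-line model. The intercept and slope used here are illustrative stand-ins, not fitted values from the simulations.

```python
def dmax_deg(eccentricity_deg, intercept=0.5, slope=0.3):
    """Maximum flash separation (deg of visual angle) still perceived
    as apparent motion, modeled as linear in eccentricity following
    the qualitative Baker and Braddick (1985) result. The intercept
    and slope are hypothetical placeholders."""
    return intercept + slope * eccentricity_deg

# Larger jumps still read as apparent motion in the periphery
# than at the fovea:
for ecc in (0, 5, 10, 20):
    print(ecc, dmax_deg(ecc))
```

Under any positive slope, Dmax grows linearly with eccentricity, matching the qualitative pattern the model reproduces without recourse to extrastriate areas.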
Dewar, Michaela T; Carey, David P
2006-01-01
Recent findings of visuomotor immunity to perceptual illusions have been attributed to a perception-action division of labour within two anatomically segregated streams in the visual cortex. However, critics argue that such experimental findings are not valid and have suggested that the perception-action dissociations can be explained away by differential attentional/processing demands, rather than a functional dissociation in the neurologically intact brain: perceptual tasks require processing of the entire illusion display while visuomotor tasks only require processing the target that is acted upon. The present study examined whether grasping of the Müller-Lyer display would remain immune to the illusion when the task required the direction of attention or a related resource towards both Müller-Lyer shafts. Twelve participants were required to match and grasp two Müller-Lyer shafts bimanually (i.e. one with each hand). It was found that bimanual grasping was not significantly affected by the illusion, while there was a highly significant illusion effect on perceptual estimation by matching. Furthermore, it was established that this dissociation did not result from a differing baseline rate of change in manual estimation and grasping aperture to a change in physical object size. These findings provide further support for the postulated perception-action dissociation and fail to uphold the idea that grasping 'immunity' to the Müller-Lyer illusions merely represents an experimental artefact.
Coordinate Transformations in Object Recognition
ERIC Educational Resources Information Center
Graf, Markus
2006-01-01
A basic problem of visual perception is how human beings recognize objects after spatial transformations. Three central classes of findings have to be accounted for: (a) Recognition performance varies systematically with orientation, size, and position; (b) recognition latencies are sequentially additive, suggesting analogue transformation…
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Evolution of eye size and shape in primates.
Ross, Callum F; Kirk, E Christopher
2007-03-01
Strepsirrhine and haplorhine primates exhibit highly derived features of the visual system that distinguish them from most other mammals. Comparative data link the evolution of these visual specializations to the sequential acquisition of nocturnal visual predation in the primate stem lineage and diurnal visual predation in the anthropoid stem lineage. However, it is unclear to what extent these shifts in primate visual ecology were accompanied by changes in eye size and shape. Here we investigate the evolution of primate eye morphology using a comparative study of a large sample of mammalian eyes. Our analysis shows that primates differ from other mammals in having large eyes relative to body size and that anthropoids exhibit unusually small corneas relative to eye size and body size. The large eyes of basal primates probably evolved to improve visual acuity while maintaining high sensitivity in a nocturnal context. The reduced corneal sizes of anthropoids reflect reductions in the size of the dioptric apparatus as a means of increasing posterior nodal distance to improve visual acuity. These data support the conclusion that the origin of anthropoids was associated with a change in eye shape to improve visual acuity in the context of a diurnal predatory habitus.
Audio-visual sensory deprivation degrades visuo-tactile peri-personal space.
Noel, Jean-Paul; Park, Hyeong-Dong; Pasqualini, Isabella; Lissek, Herve; Wallace, Mark; Blanke, Olaf; Serino, Andrea
2018-05-01
Self-perception is scaffolded upon the integration of multisensory cues on the body, the space surrounding the body (i.e., the peri-personal space; PPS), and from within the body. We asked whether reducing information available from external space would change: PPS, interoceptive accuracy, and self-experience. Twenty participants were exposed to 15 min of audio-visual deprivation and performed: (i) a visuo-tactile interaction task measuring their PPS; (ii) a heartbeat perception task measuring interoceptive accuracy; and (iii) a series of questionnaires related to self-perception and mental illness. These tasks were carried out in two conditions: while exposed to a standard sensory environment and under a condition of audio-visual deprivation. Results suggest that while PPS becomes ill defined after audio-visual deprivation, interoceptive accuracy is unaltered at a group-level, with some participants improving and some worsening in interoceptive accuracy. Interestingly, correlational individual differences analyses revealed that changes in PPS after audio-visual deprivation were related to interoceptive accuracy and self-reports of "unusual experiences" on an individual subject basis. Taken together, the findings argue for a relationship between the malleability of PPS, interoceptive accuracy, and an inclination toward aberrant ideation often associated with mental illness. Copyright © 2018. Published by Elsevier Inc.
Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko
2002-11-01
To objectively evaluate visual perception in patients with mental retardation (MR), P300 event-related potentials (ERPs) for visual oddball tasks were recorded in 26 patients and 13 age-matched healthy volunteers. The latency and amplitude of the visual P300 were measured in response to Japanese ideogram stimuli (a pair of familiar or unfamiliar Kanji characters) and a pair of meaningless complicated figures. A visual P300 was observed in almost all MR patients; however, its peak latency was significantly prolonged compared with control subjects. There was no significant difference in P300 latency among the three tasks. The scalp distribution of the P300 in MR patients differed from that in controls, with larger amplitudes in the frontal region. Latency decreased with age in both groups, and the developmental change in P300 latency corresponded to developmental age rather than chronological age. These findings suggest that MR patients have impaired visual perceptual processing, and that P300 latencies to visual stimuli may be useful as an objective indicator of mental deficit.
García-Domene, M C; Luque, M J; Díez-Ajenjo, M A; Desco-Esteban, M C; Artigas, J M
2018-02-01
To analyse the relationship between choroidal thickness and visual perception in patients with high myopia but without retinal damage. All patients underwent ophthalmic evaluation including slit-lamp examination and dilated ophthalmoscopy, subjective refraction, best-corrected visual acuity, axial length, optical coherence tomography, contrast sensitivity function, and sensitivity of the visual pathways. We included eleven eyes of subjects with high myopia. There were statistically significant correlations between choroidal thickness and almost all contrast sensitivity values. The sensitivity of the magnocellular and koniocellular pathways is the most affected, and the homogeneity of magnocellular-pathway sensitivity depends on choroidal thickness: as thickness decreases, the sensitivity impairment extends from the centre to the periphery of the visual field. Patients with high myopia without any fundus changes thus have visual impairments. We found that choroidal thickness correlates with perceptual parameters such as contrast sensitivity, and with the mean defect and pattern standard deviation of the visual fields for some visual pathways. Our study shows that the magnocellular and koniocellular pathways are the most affected, so these patients have impaired motion perception and blue-yellow contrast perception. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Posture-based processing in visual short-term memory for actions.
Vicary, Staci A; Stevens, Catherine J
2014-01-01
Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with an increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
Changing motor perception by sensorimotor conflicts and body ownership
Salomon, R.; Fernandez, N. B.; van Elk, M.; Vachicouras, N.; Sabatier, F.; Tychinskaya, A.; Llobera, J.; Blanke, O.
2016-01-01
Experimentally induced sensorimotor conflicts can result in a loss of the feeling of control over a movement (sense of agency). These findings are typically interpreted in terms of a forward model in which the predicted sensory consequences of the movement are compared with the observed sensory consequences. In the present study we investigated whether a mismatch between movements and their observed sensory consequences does not only result in a reduced feeling of agency, but may affect motor perception as well. Visual feedback of participants’ finger movements was manipulated using virtual reality to be anatomically congruent or incongruent to the performed movement. Participants made a motor perception judgment (i.e. which finger did you move?) or a visual perceptual judgment (i.e. which finger did you see moving?). Subjective measures of agency and body ownership were also collected. Seeing movements that were visually incongruent to the performed movement resulted in a lower accuracy for motor perception judgments, but not visual perceptual judgments. This effect was modified by rotating the virtual hand (Exp.2), but not by passively induced movements (Exp.3). Hence, sensorimotor conflicts can modulate the perception of one’s motor actions, causing viewed “alien actions” to be felt as one’s own. PMID:27225834
Serial dependence in the perception of attractiveness.
Xia, Ye; Leib, Allison Yamanashi; Whitney, David
2016-12-01
The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
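The perceptual pull described here is commonly quantified by regressing the current trial's rating error on the stimulus change from the previous trial; a negative slope on current-minus-previous differences indicates attraction toward the preceding stimulus. A minimal sketch on simulated ratings (the 20% pull, noise level, and rating scale are assumptions for illustration, not the paper's data or exact estimator):

```python
import numpy as np

def serial_dependence_slope(stimuli, responses):
    """Slope of response error on trial t against the stimulus change
    from trial t-1 to t. A negative slope means responses are pulled
    toward the previous stimulus (serial dependence)."""
    stimuli = np.asarray(stimuli, dtype=float)
    responses = np.asarray(responses, dtype=float)
    error = responses[1:] - stimuli[1:]          # rating error on trial t
    delta = stimuli[1:] - stimuli[:-1]           # current minus previous stimulus
    return np.polyfit(delta, error, 1)[0]        # linear regression slope

# Simulated observer whose rating is pulled 20% toward the previous stimulus.
rng = np.random.default_rng(1)
stim = rng.uniform(0, 10, 500)                   # hypothetical attractiveness levels
resp = stim.copy()
resp[1:] += 0.2 * (stim[:-1] - stim[1:]) + rng.normal(0, 0.1, 499)
print(serial_dependence_slope(stim, resp))       # negative, near -0.2
```

The abstract's control for response bias corresponds to showing that the slope survives when the previous motor response, rather than the previous stimulus, is partialled out.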
Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.
Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas
2016-01-01
While perceptual learning increases objective sensitivity, the effects on the constant interaction of the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity, estimated from certainty ratings by a bias-free signal detection theoretic approach, in contrast did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself, determined the subjects' visual perceptual learning. Improvements in objective performance and in the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
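A "bias-free signal detection theoretic approach" of the kind the abstract refers to separates objective sensitivity (type-1 d') from metacognitive sensitivity (how well confidence tracks one's own accuracy). A minimal sketch of both quantities, using the area under the type-2 ROC as the bias-free metacognitive index (the trial counts are hypothetical, and the authors' exact estimator may differ):

```python
import numpy as np
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Type-1 sensitivity d' from trial counts, with a log-linear
    correction to avoid infinite z-scores at rates of 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

def type2_auc(confidence, correct):
    """Bias-free metacognitive sensitivity: area under the type-2 ROC,
    i.e. how well confidence ratings discriminate the observer's own
    correct from incorrect classifications (0.5 = no metacognition)."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    c_hi = confidence[correct]       # confidence on correct trials
    c_lo = confidence[~correct]      # confidence on error trials
    # Probability that a random correct trial carries higher confidence
    # than a random incorrect one; ties count half.
    greater = (c_hi[:, None] > c_lo[None, :]).mean()
    ties = (c_hi[:, None] == c_lo[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical counts: 80 hits, 20 misses, 30 false alarms, 70 correct rejections.
print(round(d_prime(80, 20, 30, 70), 2))
```

The study's dissociation amounts to d' improving across sessions while an index like `type2_auc` stays flat.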
Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability.
van Dijk, Hanneke; Schoffelen, Jan-Mathijs; Oostenveld, Robert; Jensen, Ole
2008-02-20
Although the resting and baseline states of the human electroencephalogram and magnetoencephalogram (MEG) are dominated by oscillations in the alpha band (approximately 10 Hz), the functional role of these oscillations remains unclear. In this study we used MEG to investigate how spontaneous oscillations preceding visual stimuli modulate visual perception in humans. Subjects had to report whether there was a subtle difference in gray levels between two superimposed discs. We then compared the prestimulus brain activity for correctly (hits) versus incorrectly (misses) identified stimuli. We found that visual discrimination ability decreased with an increase in prestimulus alpha power. Given that reaction times did not vary systematically with prestimulus alpha power, changes in vigilance are not likely to explain the change in discrimination ability. Source reconstruction using spatial filters allowed us to identify the brain areas accounting for this effect. The dominant sources modulating visual perception were localized around the parieto-occipital sulcus. We suggest that the parieto-occipital alpha power reflects functional inhibition imposed by higher-level areas, which serves to modulate the gain of the visual stream.
The nature of face representations in subcortical regions.
Gabay, Shai; Burlingham, Charles; Behrmann, Marlene
2014-07-01
Studies examining the neural correlates of face perception in humans have focused almost exclusively on the distributed cortical network of face-selective regions. Recently, however, investigations have also identified subcortical correlates of face perception, and the question addressed here concerns the nature of these subcortical face representations. To explore this issue, we presented to participants pairs of images sequentially to the same or to different eyes. Superior performance in the former over the latter condition implicates monocular, prestriate portions of the visual system. Over a series of five experiments, we manipulated both lower-level (size, location) as well as higher-level (identity) similarity across the pair of faces. A monocular advantage was observed even when the faces in a pair differed in location and in size, implicating some subcortical invariance across lower-level image properties. A monocular advantage was also observed when the faces in a pair were two different images of the same individual, indicating the engagement of subcortical representations in more abstract, higher-level aspects of face processing. We conclude that subcortical structures of the visual system are involved, perhaps interactively, in multiple aspects of face perception, and not simply in deriving initial coarse representations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effects of changes in size, speed and distance on the perception of curved 3D trajectories
Zhang, Junjun; Braunstein, Myron L.; Andersen, George J.
2012-01-01
Previous research on the perception of 3D object motion has considered time to collision, time to passage, collision detection and judgments of speed and direction of motion, but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment. PMID:23007204
Sensory adaptation for timing perception.
Roseboom, Warrick; Linares, Daniel; Nishida, Shin'ya
2015-04-22
Recent sensory experience modifies subjective timing perception. For example, when visual events repeatedly lead auditory events, such as when the sound and video tracks of a movie are out of sync, subsequent vision-leads-audio presentations are reported as more simultaneous. This phenomenon could provide insights into the fundamental problem of how timing is represented in the brain, but the underlying mechanisms are poorly understood. Here, we show that the effect of recent experience on timing perception is not just subjective; recent sensory experience also modifies relative timing discrimination. This result indicates that recent sensory history alters the encoding of relative timing in sensory areas, excluding explanations of the subjective phenomenon based only on decision-level changes. The pattern of changes in timing discrimination suggests the existence of two sensory components, similar to those previously reported for visual spatial attributes: a lateral shift in the nonlinear transducer that maps relative timing into perceptual relative timing and an increase in transducer slope around the exposed timing. The existence of these components would suggest that previous explanations of how recent experience may change the sensory encoding of timing, such as changes in sensory latencies or simple implementations of neural population codes, cannot account for the effect of sensory adaptation on timing perception.
Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren
2012-10-01
Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived even though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Despite the neural activity in response to ICs in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception, and how do they interact to achieve a coherent contour percept? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that neural activity was stronger for stimuli with more contour completion than for stimuli with more contour representation in V1 and V2, with the reverse pattern in the LOC. When inspecting the change in neural activity across the visual pathway, activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and a possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.
Mismatched summation mechanisms in older adults for the perception of small moving stimuli.
McDougall, Thomas J; Nguyen, Bao N; McKendrick, Allison M; Badcock, David R
2018-01-01
Previous studies have found evidence for reduced cortical inhibition in aging visual cortex. Reduced inhibition could plausibly increase the spatial area of excitation in receptive fields of older observers, as weaker inhibitory processes would allow the excitatory receptive field to dominate and be psychophysically measurable over larger areas. Here, we investigated aging effects on spatial summation of motion direction using the Battenberg summation method, which aims to control the influence of locally generated internal noise changes by holding overall display size constant. This method produces more accurate estimates of summation area than conventional methods that simply increase overall stimulus dimensions. Battenberg stimuli have a checkerboard arrangement, where check size (luminance-modulated drifting gratings alternating with mean luminance areas), but not display size, is varied and compared with performance for a full-field stimulus to provide a measure of summation. Motion direction discrimination thresholds, where contrast was the dependent variable, were measured in 14 younger (24-34 years) and 14 older (62-76 years) adults. Older observers were less sensitive for all check sizes, but the relative sensitivity across sizes also differed between groups. In the older adults, the full-field stimulus offered smaller performance improvements than for younger adults, specifically for the small-checked Battenberg stimuli. This suggests that aging impacts short-range summation mechanisms, potentially underpinned by larger summation areas for the perception of small moving stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Sandberg, Kristian; Bahrami, Bahador; Kanai, Ryota; Barnes, Gareth Robert; Overgaard, Morten; Rees, Geraint
2014-01-01
Previous studies indicate that conscious face perception may be related to neural activity in a large time window around 170-800 ms after stimulus presentation, yet in the majority of these studies changes in conscious experience are confounded with changes in physical stimulation. Using multivariate classification on MEG data recorded when participants reported changes in conscious perception evoked by binocular rivalry between a face and a grating, we showed that only MEG signals in the 120-320 ms time range, peaking at the M170 around 180 ms and the P2m at around 260 ms, reliably predicted conscious experience. Conscious perception could not only be decoded significantly better than chance from the sensors that showed the largest average difference, as previous studies suggest, but also from patterns of activity across groups of occipital sensors that individually were unable to predict perception better than chance. Additionally, source space analyses showed that sources in the early and late visual system predicted conscious perception more accurately than frontal and parietal sites, although conscious perception could also be decoded there. Finally, the patterns of neural activity associated with conscious face perception generalized from one participant to another around the times of maximum prediction accuracy. Our work thus demonstrates that the neural correlates of particular conscious contents (here, faces) are highly consistent in time and space within individuals and that these correlates are shared to some extent between individuals. PMID:23281780
NASA Astrophysics Data System (ADS)
Suaste-Gomez, Ernesto; Leybon, Jaime I.; Rodriguez, D.
1998-07-01
Visual scanpath analysis has been an important tool in neuro-ophthalmic and psychological studies, where it serves to assess conditions such as impaired visual perception of colour or black-and-white images, and colour blindness. The technique has also found a broad range of applications, such as marketing: the scanpath over a specific picture reveals the observer's interest in colour, shapes, letter size, and so on. Even when the picture is among a group of images, this tool has proved helpful for gauging people's interest in a specific advertisement.
Pain and other symptoms of CRPS can be increased by ambiguous visual stimuli--an exploratory study.
Hall, Jane; Harrison, Simon; Cohen, Helen; McCabe, Candida S; Harris, N; Blake, David R
2011-01-01
Visual disturbance, visuo-spatial difficulties, and exacerbations of pain associated with these, have been reported by some patients with Complex Regional Pain Syndrome (CRPS). We investigated the hypothesis that some visual stimuli (i.e. those which produce ambiguous perceptions) can induce pain and other somatic sensations in people with CRPS. Thirty patients with CRPS, 33 with rheumatology conditions and 45 healthy controls viewed two images: a bistable spatial image and a control image. For each image participants recorded the frequency of percept change in 1 min and reported any changes in somatosensation. 73% of patients with CRPS reported increases in pain and/or sensory disturbances including changes in perception of the affected limb, temperature and weight changes and feelings of disorientation after viewing the bistable image. Additionally, 13% of the CRPS group responded with striking worsening of their symptoms which necessitated task cessation. Subjects in the control groups did not report pain increases or somatic sensations. It is possible to worsen the pain suffered in CRPS, and to produce other somatic sensations, by means of a visual stimulus alone. This is a newly described finding. As a clinical and research tool, the experimental method provides a means to generate and exacerbate somaesthetic disturbances, including pain, without moving the affected limb and causing nociceptive interference. This may be particularly useful for brain imaging studies. Copyright © 2010 European Federation of International Association for the Study of Pain Chapters. Published by Elsevier Ltd. All rights reserved.
Effects of set-size and lateral masking in visual search.
Põder, Endel
2004-01-01
In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.
The effect of contextual sound cues on visual fidelity perception.
Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam
2014-01-01
Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.
Neural correlates of context-dependent feature conjunction learning in visual search tasks.
Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U
2016-06-01
Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.
Acoustic Tactile Representation of Visual Information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa
Our goal is to explore the use of hearing and touch to convey graphical and pictorial information to visually impaired people. Our focus is on dynamic, interactive display of visual information using existing, widely available devices, such as smart phones and tablets with touch sensitive screens. We propose a new approach for acoustic-tactile representation of visual signals that can be implemented on a touch screen and allows the user to actively explore a two-dimensional layout consisting of one or more objects with a finger or a stylus while listening to auditory feedback via stereo headphones. The proposed approach is acoustic-tactile because sound is used as the primary source of information for object localization and identification, while touch is used for pointing and kinesthetic feedback. A static overlay of raised-dot tactile patterns can also be added. A key distinguishing feature of the proposed approach is the use of spatial sound (directional and distance cues) to facilitate the active exploration of the layout. We consider a variety of configurations for acoustic-tactile rendering of object size, shape, identity, and location, as well as for the overall perception of simple layouts and scenes. While our primary goal is to explore the fundamental capabilities and limitations of representing visual information in acoustic-tactile form, we also consider a number of relatively simple configurations that can be tied to specific applications. In particular, we consider a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a ''virtual cane.'' We will also present a configuration that can convey a ''Venn diagram.'' We present systematic subjective experiments to evaluate the effectiveness of the proposed display for shape perception, object identification and localization, and 2-D layout perception, as well as the applications. 
Our experiments were conducted with visually blocked subjects. The results are evaluated in terms of accuracy and speed, and they demonstrate the advantages of spatial sound for guiding the scanning finger or pointer in shape perception, object localization, and layout exploration. We show that these advantages increase with the amount of detail (smaller object size) in the display. Our experimental results show that the proposed system outperforms the state of the art in shape perception, including variable friction displays. We also demonstrate that, even though they are currently available only as static overlays, raised-dot patterns provide the best shape rendition in terms of both accuracy and speed. Our experiments with layout rendering and perception demonstrate that simultaneous representation of objects, using the most effective approaches for directionality and distance rendering, approaches the optimal performance level provided by visual layout perception. Finally, experiments with the virtual cane and Venn diagram configurations demonstrate that the proposed techniques can be used effectively in simple but nontrivial real-world applications. One of the most important conclusions of our experiments is that there is a clear performance gap between experienced and inexperienced subjects, which indicates that there is a lot of room for improvement with appropriate and extensive training. By exploring a wide variety of design alternatives and focusing on different aspects of the acoustic-tactile interfaces, our results offer many valuable insights and hold great promise for the design of future systematic tests with visually impaired and visually blocked subjects, utilizing the most effective configurations.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at a 4-week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerged merely as a consequence of exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the later tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166
Perceived duration decreases with increasing eccentricity.
Kliegl, Katrin M; Huckauf, Anke
2014-07-01
Previous studies examining the influence of stimulus location on temporal perception have yielded inhomogeneous and contradictory results. The aim of the present study was therefore to rigorously examine the effect of stimulus eccentricity. In a series of five experiments, subjects compared the duration of foveal disks to disks presented at different retinal eccentricities on the horizontal meridian. The results show that the perceived duration of a visual stimulus declines with increasing eccentricity. The effect was replicated with various stimulus orders (Experiments 1-3), as well as with cortically magnified stimuli (Experiments 4-5), ruling out that the effect was merely caused by differences in cortical representation size. The apparent decrease in perceived duration with increasing eccentricity is discussed with respect to current models of time perception, the possible influence of visual attention, and the underlying physiological characteristics of the visual system. Copyright © 2014 Elsevier B.V. All rights reserved.
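The cortically magnified stimuli of Experiments 4-5 rely on enlarging peripheral stimuli so that they occupy roughly the same amount of cortex as the foveal standard. The abstract does not give the authors' exact scaling rule, so the sketch below uses the common inverse-linear cortical magnification approximation with an assumed E2 value; the function name and numbers are illustrative only.

```python
def m_scaled_size(size_foveal_deg, eccentricity_deg, e2=2.5):
    """Scale a stimulus so its cortical representation roughly matches
    that of a foveal stimulus of the given size.

    Assumes the inverse-linear magnification rule M(E) = M0 / (1 + E/E2);
    equating cortical image size then requires multiplying the stimulus
    size by (1 + E/E2). E2 = 2.5 deg is an assumed textbook value, not
    taken from this study.
    """
    return size_foveal_deg * (1 + eccentricity_deg / e2)

# A 1-deg foveal disk is enlarged to 5 deg at 10 deg eccentricity
print(m_scaled_size(1.0, 10.0))
```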
Action Intentions Modulate Allocation of Visual Attention: Electrophysiological Evidence
Wykowska, Agnieszka; Schubö, Anna
2012-01-01
In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants performed the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials, were reflected in a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning already modulates early perceptual processing and attention mechanisms. PMID:23060841
Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen
2012-01-01
Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798
Transfer of adaptation reveals shared mechanism in grasping and manual estimation.
Cesanek, Evan; Domini, Fulvio
2018-06-19
An influential idea in cognitive neuroscience is that perception and action are highly separable brain functions, implemented in distinct neural systems. In particular, this theory predicts that the functional distinction between grasping, a skilled action, and manual estimation, a type of perceptual report, should be mirrored by a split between their respective control systems. This idea has received support from a variety of dissociations, yet many of these findings have been criticized for failing to pinpoint the source of the dissociation. In this study, we devised a novel approach to this question, first targeting specific grasp control mechanisms through visuomotor adaptation, then testing whether adapted mechanisms were also involved in manual estimation - a response widely characterized as perceptual in function. Participants grasped objects in virtual reality that could appear larger or smaller than the actual physical sizes felt at the end of each grasp. After brief exposure to a size perturbation, manual estimates were biased in the same direction as the maximum grip apertures of grasping movements, indicating that the adapted mechanism is active in both tasks, regardless of the perception-action distinction. Additional experiments showed that the transfer effect generalizes broadly over space (Exp. 1B) and does not appear to arise from a change in visual perception (Exp. 2). We discuss two adaptable mechanisms that could have mediated the observed effect: (a) an afferent proprioceptive mechanism for sensing grip shape; and (b) an efferent visuomotor transformation of size information into a grip-shaping motor command. Copyright © 2018. Published by Elsevier Ltd.
Task modulates functional connectivity networks in free viewing behavior.
Seidkhani, Hossein; Nikolaev, Andrey R; Meghanathan, Radha Nila; Pezeshk, Hamid; Masoudi-Nejad, Ali; van Leeuwen, Cees
2017-10-01
In free visual exploration, each eye movement is immediately followed by dynamic reconfiguration of brain functional connectivity. We studied the task-dependency of this process in a combined visual search-change detection experiment. Participants viewed two (nearly) identical displays in succession. The first time, they had to find and remember multiple targets among distractors, so the ongoing task involved memory encoding. The second time, they had to determine whether a target had changed in orientation, so the ongoing task involved memory retrieval. From multichannel EEG recorded during 200 ms intervals time-locked to fixation onsets, we estimated functional connectivity using a weighted phase lag index at the frequencies of the theta, alpha, and beta bands, and derived global and local measures of the functional connectivity graphs. We found differences between the two memory task conditions for several network measures, such as mean path length, radius, diameter, closeness, and eccentricity, mainly in the alpha band. Both the local and the global measures indicated that encoding involved a more segregated mode of operation than retrieval. These differences arose immediately after fixation onset and persisted for the entire duration of the lambda complex, an evoked potential commonly associated with early visual perception. We concluded that encoding and retrieval differentially shape the network configurations involved in early visual perception, affecting the way the visual input is processed at each fixation. These findings demonstrate that task requirements dynamically control the functional connectivity networks involved in early visual perception. Copyright © 2017 Elsevier Inc. All rights reserved.
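The connectivity measure named in this record, the weighted phase lag index (wPLI), is computed from the imaginary part of the cross-spectrum between two channels, which makes it robust to zero-lag volume conduction. A minimal sketch, assuming trial-segmented data and an alpha-band filter; the authors' actual preprocessing, channel selection, and implementation are not specified here, and the synthetic signals below are ours.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def wpli(x, y, fs, band=(8.0, 12.0)):
    """Weighted phase lag index between two signals (trials x samples).

    Band-pass filters both signals, takes the analytic signal, and
    computes wPLI = |E[Im(Sxy)]| / E[|Im(Sxy)|] from the imaginary
    cross-spectrum, pooled over trials and samples.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    zx = hilbert(filtfilt(b, a, x, axis=-1), axis=-1)
    zy = hilbert(filtfilt(b, a, y, axis=-1), axis=-1)
    im = np.imag(zx * np.conj(zy))      # imaginary part of cross-spectrum
    return np.abs(np.mean(im)) / (np.mean(np.abs(im)) + 1e-12)

# Two 10 Hz signals with a stable 90-degree phase lag: wPLI should be high
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
trials_x = np.array([np.sin(2 * np.pi * 10 * t)
                     + 0.1 * rng.standard_normal(t.size) for _ in range(20)])
trials_y = np.array([np.sin(2 * np.pi * 10 * t + np.pi / 2)
                     + 0.1 * rng.standard_normal(t.size) for _ in range(20)])
print(wpli(trials_x, trials_y, fs))
```

A consistent non-zero phase lag drives wPLI toward 1, while a signal paired with itself (zero lag, purely real cross-spectrum) yields 0.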
Bukman, Andrea J; Teuscher, Dorit; Feskens, Edith J M; van Baak, Marleen A; Meershoek, Agnes; Renes, Reint Jan
2014-10-04
Individuals with low socioeconomic status (SES) are generally less well reached through lifestyle interventions than individuals with higher SES. The aim of this study was to identify opportunities for adapting lifestyle interventions in such a way that they are more appealing for individuals with low SES. To this end, the study provides insight into perspectives of groups with different socioeconomic positions regarding their current eating and physical activity behaviour; triggers for lifestyle change; and ways to support lifestyle change. Data were gathered in semi-structured focus group interviews among low SES (four groups) and high SES (five groups) adults. The group size varied between four and nine participants. The main themes discussed were perceptions and experiences of healthy eating, physical activity and lifestyle advice. Interviews were transcribed verbatim and a thematic approach was used to analyse the data. In general, three key topics were identified, namely: current lifestyle is logical for participants given their personal situation; lifestyle change is prompted by feedback from their body; and support for lifestyle change should include individually tailored advice and could profit from involving others. The perceptions of the low SES participants were generally comparable to the perceptions shared by the high SES participants. Some perceptions were, however, especially shared in the low SES groups. Low SES participants indicated that their current eating behaviour was sometimes affected by cost concerns. They seemed to be especially motivated to change their lifestyle when they experienced health complaints, but were rather hesitant to change their lifestyle for preventive purposes. Regarding support for lifestyle change, low SES participants preferred to receive advice in a group rather than on their own. For physical activities, groups should preferably consist of persons of the same age, gender or physical condition. 
To motivate individuals with low SES to change their lifestyle, it may be useful to (visually) raise their awareness of their current weight or health status. Lifestyle interventions targeting individuals with low SES should take possible cost concerns into account and should harness the supportive effect of (peer) groups.
Visual Aversive Learning Compromises Sensory Discrimination.
Shalev, Lee; Paz, Rony; Avidan, Galia
2018-03-14
Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. 
We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors.
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2014-01-01
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko
2002-07-01
To evaluate developmental changes in visual perception, P300 event-related potentials (ERPs) in a visual oddball task were recorded in 34 healthy volunteers ranging from 7 to 37 years of age. The latency and amplitude of the visual P300 in response to Japanese ideogram stimuli (a pair of familiar Kanji characters or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. The visual P300 was dominant over the parietal area in almost all subjects. There was a significant difference in P300 latency among the three tasks. Reaction times to both kinds of Kanji tasks were significantly shorter than those to the complicated-figure task. P300 latencies to the familiar Kanji, unfamiliar Kanji, and figure stimuli decreased until 25.8, 26.9, and 29.4 years of age, respectively, and regression analysis revealed that a positive quadratic function could be fitted to the data. Around 9 years of age, the P300 latency/age slope was largest in the unfamiliar Kanji task. These findings suggest that visual P300 development depends on both the complexity of the tasks and the specificity of the stimuli, which might reflect variety in visual information processing.
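The "positive quadratic function" fitted to latency versus age implies a U-shaped curve whose vertex gives the age of minimum latency (25.8-29.4 years, depending on task). The fitting step can be sketched as follows; the data are invented and noiseless purely to illustrate how the vertex age falls out of the fitted coefficients, and do not reproduce the study's measurements.

```python
import numpy as np

# Hypothetical P300 latency (ms) vs age: falls through childhood and
# rises again in adulthood, with a minimum placed at 27 years.
ages = np.array([7, 9, 11, 14, 17, 20, 24, 27, 30, 34, 37], dtype=float)
latency = 350 + 0.25 * (ages - 27.0) ** 2  # exact quadratic, no noise

a, b, c = np.polyfit(ages, latency, deg=2)  # a > 0 means U-shaped
vertex_age = -b / (2 * a)                   # age of minimum latency
print(round(vertex_age, 1))  # recovers 27.0
```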
Rocha, Karolinne Maia; Vabre, Laurent; Chateau, Nicolas; Krueger, Ronald R
2010-01-01
To evaluate the changes in visual acuity and visual perception generated by correcting higher order aberrations in highly aberrated eyes using a large-stroke adaptive optics visual simulator. A crx1 Adaptive Optics Visual Simulator (Imagine Eyes) was used to correct and modify the wavefront aberrations in 12 keratoconic eyes and 8 symptomatic postoperative refractive surgery (LASIK) eyes. After measuring ocular aberrations, the device was programmed to compensate for the eye's wavefront error from the second order to the fifth order (6-mm pupil). Visual acuity was assessed through the adaptive optics system using computer-generated ETDRS optotypes and the Freiburg Visual Acuity and Contrast Test. Mean higher order aberration root-mean-square (RMS) errors in the keratoconus and symptomatic LASIK eyes were 1.88 ± 0.99 μm and 1.62 ± 0.79 μm (6-mm pupil), respectively. The visual simulator correction of the higher order aberrations present in the keratoconus eyes improved their visual acuity by a mean of 2 lines when compared to their best spherocylinder correction (mean decimal visual acuity with spherocylindrical correction was 0.31 ± 0.18 and improved to 0.44 ± 0.23 with higher order aberration correction). In the symptomatic LASIK eyes, the mean decimal visual acuity with spherocylindrical correction improved from 0.54 ± 0.16 to 0.71 ± 0.13 with higher order aberration correction. The visual perception of ETDRS letters was improved when correcting higher order aberrations. The adaptive optics visual simulator can effectively measure and compensate for higher order aberrations (second to fifth order), which are associated with diminished visual acuity and perception in highly aberrated eyes. The adaptive optics technology may be of clinical benefit when counseling patients with highly aberrated eyes regarding their maximum subjective potential for vision correction. Copyright 2010, SLACK Incorporated.
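The RMS wavefront errors quoted above follow directly from the Zernike expansion of the wavefront: with orthonormal (ANSI-normalized) Zernike polynomials, the RMS over the pupil is simply the root sum of squares of the coefficients. A sketch with invented coefficients; the per-eye coefficients from the study are not given in the abstract.

```python
import math

def rms_from_zernike(coeffs_by_order):
    """RMS wavefront error (microns) from ANSI-normalized Zernike
    coefficients grouped by radial order.

    Orthonormality makes the pupil-averaged RMS the root sum of
    squares of all included coefficients.
    """
    return math.sqrt(sum(c * c
                         for coeffs in coeffs_by_order.values()
                         for c in coeffs))

# Invented higher-order coefficients (microns, 6-mm pupil), for
# illustration only: orders 3-5 (coma/trefoil, spherical aberration, ...)
eye = {3: [0.9, -0.4, 0.2, 0.1],
       4: [0.5, 0.2, -0.1, 0.05, 0.0],
       5: [0.1, 0.05, 0.0, 0.0, 0.0, 0.0]}
print(round(rms_from_zernike(eye), 2))
```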
Pupil size reflects successful encoding and recall of memory in humans.
Kucewicz, Michal T; Dolezal, Jaromir; Kremen, Vaclav; Berry, Brent M; Miller, Laura R; Magee, Abigail L; Fabian, Vratislav; Worrell, Gregory A
2018-03-21
Pupil responses are known to indicate brain processes involved in perception, attention, and decision-making. They can provide an accessible biomarker of human memory performance and cognitive states in general. Here we investigated changes in pupil size during encoding and recall of word lists. Consistent patterns in the pupil response were found across and within distinct phases of the free recall task. The pupil was most constricted in the initial fixation phase and grew gradually more dilated through the subsequent encoding, distractor, and recall phases of the task, as the word items were maintained in memory. Within the final recall phase, retrieving memory for individual words was associated with pupil dilation in the absence of visual stimulation. Words that were successfully recalled showed significant differences in pupil response during their encoding compared to those that were forgotten - the pupil was more constricted before and more dilated after the onset of word presentation. Our results suggest that pupil size is a potential biomarker for probing and modulating memory processing.
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2011-10-01
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Short-term plasticity in auditory cognition.
Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko
2007-12-01
Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.
Bokde, Arun L W; Karmann, Michaela; Teipel, Stefan J; Born, Christine; Lieb, Martin; Reiser, Maximilian F; Möller, Hans-Jürgen; Hampel, Harald
2009-04-01
Visual perception has been shown to be altered in Alzheimer disease (AD) patients, and this alteration is associated with decreased cognitive function. Galantamine is an active cholinergic agent that has been shown to improve cognition in mild to moderate AD patients. This study examined brain activation in a group of mild AD patients after a 3-month open-label treatment with galantamine. The objective was to examine the changes in brain activation due to treatment. Two visual perception tasks were used. The first was a face-matching task to test activation along the ventral visual pathway, and the second was a location-matching task to test neuronal function along the dorsal pathway. Brain activation was measured using functional magnetic resonance imaging. Five mild AD patients took part in the study. There were no differences in task performance or in the cognitive scores of the Consortium to Establish a Registry for Alzheimer's Disease battery before and after treatment. In the location-matching task, we found a statistically significant decrease in activation along the dorsal visual pathway after galantamine treatment. A previous study found that AD patients had higher activation in the location-matching task compared with healthy controls. There were no differences in activation for the face-matching task after treatment. Our data indicate that treatment with galantamine leads to more efficient visual processing of stimuli or changes the compensatory mechanism in the AD patients. A visual perception task recruiting the dorsal visual system may be useful as a biomarker of treatment effects.
Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.
Kok, Peter; de Lange, Floris P
2014-07-07
An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Touch influences perceived gloss
Adams, Wendy J.; Kerrigan, Iona S.; Graf, Erich W.
2016-01-01
Identifying an object’s material properties supports recognition and action planning: we grasp objects according to how heavy, hard or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (shinier). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction – slippery objects were rated as glossier and vice versa. The sensory system treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception – a sensible strategy given the ambiguity of visual cues to gloss. PMID:26915492
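The visual-haptic integration described in this record is commonly modeled as reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance. The sketch below shows only that standard model; the function name and parameter values are illustrative assumptions, not quantities reported by the study.

```python
def combine_cues(m_visual, var_visual, m_haptic, var_haptic):
    """Reliability-weighted (maximum-likelihood) combination of two cues.

    Each cue's weight is its inverse variance; the combined estimate has
    lower variance than either cue alone.
    """
    w_v = 1.0 / var_visual
    w_h = 1.0 / var_haptic
    mean = (w_v * m_visual + w_h * m_haptic) / (w_v + w_h)
    var = 1.0 / (w_v + w_h)
    return mean, var

# Vision suggests glossiness 0.7 (reliable); felt slipperiness implies
# 0.3 (less reliable): the percept is pulled toward vision but shifted
# by touch. All numbers are invented for illustration.
mean, var = combine_cues(0.7, 0.01, 0.3, 0.04)
print(round(mean, 2), round(var, 4))
```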
Henin, Simon; Fein, Dovid; Smouha, Eric; Parra, Lucas C
2016-01-01
Tinnitus correlates with elevated hearing thresholds and reduced cochlear compression. We hypothesized that reduced peripheral input leads to elevated neuronal gain, resulting in the perception of a phantom sound. The purpose of this pilot study was to test whether compensating for this peripheral deficit could reduce the tinnitus percept acutely using customized auditory stimulation. To further enhance the effects of auditory stimulation, this intervention was paired with high-definition transcranial direct current stimulation (HD-tDCS). A randomized, sham-controlled, single-blind study was conducted in a clinical setting on adult participants with chronic tinnitus (n = 14). Compensatory auditory stimulation (CAS) and HD-tDCS were administered either individually or in combination in order to assess the effects of both interventions on tinnitus perception. CAS consisted of sound exposure typical of daily living (a 20-minute soundtrack of a TV show), which was adapted with compressive gain to compensate for deficits in each subject's individual audiogram. Minimum masking levels and the visual analog scale were used to assess the strength of the tinnitus percept immediately before and after the treatment intervention. CAS reduced minimum masking levels, and visual analog scale ratings trended toward improvement. Effects of HD-tDCS could not be resolved with the current sample size. The results of this pilot study suggest that providing tailored auditory stimulation with frequency-specific gain and compression may alleviate tinnitus in a clinical population. Further experimentation with longer interventions is warranted in order to optimize effect sizes.
The role of human ventral visual cortex in motion perception
Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene
2013-01-01
Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030
Category learning increases discriminability of relevant object dimensions in visual cortex.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2013-04-01
Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.
Lemeshchenko, N A; Ivanov, A I; Lapa, V V; Davydov, V V; Zhelonkin, V I; Riabinin, V A; Golosov, S Iu
2014-01-01
The article presents results of experimental studies conducted on a flight test bench, covering peculiarities of pilots' perception of flight information presented on an on-board liquid crystal display as a function of the rate of parameter change and the screen update rate. The authors determine the update-rate frequencies that achieve acceptable quality of flight-parameter perception for a given rate of change. Vigorous maneuvering with high angular velocities in roll and pitch causes visual distortions that are associated with an insufficient information update rate, degrade piloting quality, and can compromise flight safety.
Hemispheric differences in visual search of simple line arrays.
Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W
1990-01-01
The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.
Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie
2017-08-01
Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies were designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception, version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1 in typically developing children. Study 2 is aimed at finding out whether these skills can be seen as clinical markers in dyslexic (DD) children. Study 3 determines whether visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether or not they exhibit developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. The DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. The DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration and reading, and to differentiate the cognitive profiles of children with developmental disabilities (i.e. DD, DCD, and comorbid children). Copyright © 2017 John Wiley & Sons, Ltd.
Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H; Andersen, George J; Watanabe, Takeo; Sasaki, Yuka
2015-01-01
Although normal aging is known to reduce cortical structures globally, the effects of aging on local structures and functions of early visual cortex are less understood. Here, using standard retinotopic mapping and magnetic resonance imaging morphologic analyses, we investigated whether aging affects the areal sizes of the early visual areas, which were retinotopically localized, and whether those morphologic measures were associated with individual performance in visual perceptual learning. First, significant age-associated reduction was found in the areal size of V1, V2, and V3. Second, individual ability in visual perceptual learning was significantly correlated with the areal size of V3 in older adults. These results demonstrate that aging changes local structures of the early visual cortex, and that the degree of change may be associated with individual visual plasticity. Copyright © 2015 Elsevier Inc. All rights reserved.
Verticality perception during and after galvanic vestibular stimulation.
Volkening, Katharina; Bergmann, Jeannine; Keller, Ingo; Wuehr, Max; Müller, Friedemann; Jahn, Klaus
2014-10-03
The human brain constructs verticality perception by integrating vestibular, somatosensory, and visual information. Here we investigated whether galvanic vestibular stimulation (GVS) has an effect on verticality perception both during and after application, by assessing the subjective verticals (visual, haptic and postural) in healthy subjects at those times. During stimulation the subjective visual vertical and the subjective haptic vertical shifted towards the anode, whereas this shift was reversed towards the cathode in all modalities once stimulation was turned off. Overall, the effects were strongest for the haptic modality. Additional investigation of the time course of GVS-induced changes in the haptic vertical revealed that anodal shifts persisted for the entire 20-min stimulation interval in the majority of subjects. Aftereffects exhibited different types of decay, with a preponderance for an exponential decay. The existence of such reverse effects after stimulation could have implications for GVS-based therapy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Towards Understanding the Role of Colour Information in Scene Perception using Night Vision Device
2009-06-01
possessing a visual system much simplified from that of living birds, reptiles, and teleost (bony) fish , which are generally tetrachromatic (Bowmaker...Levkowitz and Herman (1992) speculated that the results might be limited to “ blob ” detection. A possible mediating factor may have been the size and...sharpness of the “ blobs ” used in their task. Mullen (1985) showed that the visual system is much more sensitive to the 7 DSTO-RR-0345 high spatial
Eubanks, Jessica R; Kenkel, Michaela Y; Gardner, Rick M
2006-04-01
This study investigated the relations among physical, emotional, and sexual abuse up to adolescence and subsequent perception of body size, detection of changes in body size, and body-esteem. The role of parenting history in abused participants was also examined. 38 college undergraduate women, half of whom had been abused, reported instances of abuse, childhood parenting history, and current body-esteem. A recently developed software program by Gardner and Boice was used to present a series of distorted frontal profiles of each participant's own body for the women to rate as being too wide or too thin. A psychophysical procedure called adaptive probit estimation was used to measure the amount of over- and underestimation in these ratings and whether these changes were statistically significant. Analysis showed that abused participants had distorted perceptions of body size, although the direction of the distortion was not consistent. There was no difference in detection of changes in body size. Abused and nonabused participants differed on measures of body-esteem and on ratings of most parenting experiences, including experiences with both mothers and fathers.
Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly
2017-01-01
The brain networks supporting nausea are not yet fully understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation of the anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu
2014-04-23
How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene, are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual-plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but can also clarify why the vanishing point and vanishing line exist in a visual image. In addition, they can better explain why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference in how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and an algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.
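The role of the vanishing point in mapping a 3-D scene onto a 2-D image plane can be illustrated with a standard pinhole-projection sketch. This is not the authors' plenoptic-function formalism; the focal length and line coordinates below are arbitrary choices for the example.

```python
import numpy as np

def project(points, f=1.0):
    """Pinhole projection of 3-D points (x, y, z) onto the image plane z = f."""
    pts = np.asarray(points, dtype=float)
    return f * pts[:, :2] / pts[:, 2:3]

# Two parallel 3-D lines with direction d = (1, 0, 1), receding from the viewer.
d = np.array([1.0, 0.0, 1.0])
t = np.linspace(1, 1000, 50)[:, None]
line_a = np.array([0.0, -1.0, 2.0]) + t * d
line_b = np.array([0.0,  1.0, 2.0]) + t * d

# Images of both lines converge to the same vanishing point (f*dx/dz, f*dy/dz),
# regardless of where each line starts.
vp = d[:2] / d[2]
print(project(line_a)[-1], project(line_b)[-1], vp)
```

The sketch shows why all parallel lines sharing a direction map to a single image point, which is the geometric fact underlying the vanishing point and line discussed above.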
Fostering Kinship with Animals: Animal Portraiture in Humane Education
ERIC Educational Resources Information Center
Kalof, Linda; Zammit-Lucia, Joe; Bell, Jessica; Granter, Gina
2016-01-01
Visual depictions of animals can alter human perceptions of, emotional responses to, and attitudes toward animals. Our study addressed the potential of a slideshow designed to activate emotional responses to animals to foster feelings of kinship with them. The personal meaning map measured changes in perceptions of animals. The participants were…
The Posture of Putting One's Palms Together Modulates Visual Motion Event Perception.
Saito, Godai; Gyoba, Jiro
2018-02-01
We investigated the effect of an observer's hand postures on visual motion perception using the stream/bounce display. When two identical visual objects move along collinear horizontal trajectories toward each other in a two-dimensional display, observers perceive them as either streaming or bouncing. In our previous study, we found that when observers put their palms together just below the coincidence point of the two objects, the percentage of bouncing responses increased, mainly depending on the proprioceptive information from their own hands. However, it remained unclear whether tactile or haptic (force) information produced by the posture contributes most to stream/bounce perception. We addressed this question by changing the tactile and haptic information on the palms of the hands. Experiment 1 showed that the promotion of bouncing perception was observed only when the posture of directly putting one's palms together was used, while there was no effect when a brick was sandwiched between the participant's palms. Experiment 2 demonstrated that the strength of force used when putting the palms together had no effect on increasing bounce perception. Our findings indicate that the hands-induced bounce effect derives from the tactile information produced by the direct contact between both palms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Like; Kang, Jian, E-mail: j.kang@sheffield.ac.uk; Schroth, Olaf
Large scale transportation projects can adversely affect the visual perception of environmental quality and require adequate visual impact assessment. In this study, we investigated the effects of the characteristics of the road project and the character of the existing landscape on the perceived visual impact of motorways, and developed a GIS-based prediction model based on the findings. An online survey using computer-visualised scenes of different motorway and landscape scenarios was carried out to obtain perception-based judgements on the visual impact. Motorway scenarios simulated included the baseline scenario without road, the original motorway, and motorways with timber noise barriers, transparent noise barriers and a tree screen; different landscape scenarios were created by changing the land cover of buildings and trees in three distance zones. The landscape content of each scene was measured in GIS. The result shows that the presence of a motorway, especially with the timber barrier, significantly decreases the visual quality of the view. The resulting visual impact tends to be lower where the view is less visually pleasant with more buildings, and can be slightly reduced by the visual absorption effect of scattered trees between the motorway and the viewpoint. Based on the survey result, eleven predictors were identified for the visual impact prediction model, which was applied in GIS to generate maps of the visual impact of motorways in different scenarios. The proposed prediction model can be used to achieve efficient and reliable assessment of the visual impact of motorways. - Highlights: • Motorways induce significant visual impact especially with timber noise barriers. • Visual impact is negatively correlated with the amount of buildings in the view. • Visual impact is positively correlated with the percentage of trees in the view. • Perception-based motorway visual impact prediction model using mapped predictors. • Predicted visual impacts in different scenarios are mapped in GIS.
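A perception-based prediction model of this kind can be sketched as a simple linear score over GIS-measured predictors. The predictor names, weights, and intercept below are purely illustrative: the abstract does not list the eleven fitted predictors or their coefficients.

```python
# Hypothetical predictors measured per viewpoint in GIS (names and
# coefficients are illustrative only, not the paper's fitted model).
predictors = {
    "pct_motorway_visible": 0.12,   # more visible motorway -> higher impact
    "pct_buildings_near":  -0.05,   # built-up views rate lower baseline quality
    "pct_trees_between":   -0.08,   # screening trees absorb some impact
}

def predicted_visual_impact(features, intercept=2.0, weights=predictors):
    """Linear perception-based impact score: intercept + sum(w_i * x_i)."""
    return intercept + sum(weights[k] * features[k] for k in weights)

score = predicted_visual_impact(
    {"pct_motorway_visible": 30.0,
     "pct_buildings_near": 10.0,
     "pct_trees_between": 20.0})
# 2.0 + 0.12*30 - 0.05*10 - 0.08*20 = 3.5
```

Evaluating such a score per map cell is what allows the model to be "applied in GIS to generate maps" of predicted impact.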
Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W
2017-12-01
Why does our visual system fail to reconstruct reality, when we look at certain patterns? Where do Geometrical illusions start to emerge in the visual pathway? How far should we take computational models of vision with the same visual ability to detect illusions as we do? This study addresses these questions, by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among many types of visual illusion, 'Geometrical' and, in particular, 'Tilt Illusions' are rather important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when and where retinal processing takes place, and the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cells responses to some complex Tilt Illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of Tilt Illusion, predicting that the final tilt percept arises from multiple-scale processing of the Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of classical receptive field implementation for simple cells in early stages of vision with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has a high potential in revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'Anchoring theory' and 'Perceptual grouping'.
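The multiscale Difference-of-Gaussians filtering at the heart of such retinal models can be sketched as a generic centre-surround filter bank. The 2:1 surround-to-centre ratio, the chosen scales, and the test pattern are assumptions for illustration; the paper tunes its scales to the object/texture sizes of specific illusion patterns.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center, surround_ratio=2.0):
    """Centre-surround (Difference of Gaussians) response at one scale:
    narrow Gaussian (centre) minus wider Gaussian (surround)."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, surround_ratio * sigma_center)
    return center - surround

# A simple square-wave tile pattern as a stand-in for an illusion stimulus.
img = (np.indices((64, 64)).sum(axis=0) % 16 < 8).astype(float)

# Multiscale edge maps; tilt percepts are modelled as arising from the
# interaction of responses across such scales.
edge_maps = [dog_response(img, s) for s in (1.0, 2.0, 4.0)]
```

Because the DoG kernel is roughly zero-sum, uniform regions produce near-zero output while tile borders produce strong signed responses, which is the retinal "edge code" the model feeds into later processing.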
Reliability of a computer-based system for measuring visual performance skills.
Erickson, Graham B; Citek, Karl; Cove, Michelle; Wilczek, Jennifer; Linster, Carolyn; Bjarnason, Brendon; Langemo, Nathan
2011-09-01
Athletes have demonstrated better visual abilities than nonathletes. A vision assessment for an athlete should include methods to evaluate the quality of visual performance skills in the most appropriate, accurate, and repeatable manner. This study determines the reliability of the visual performance measures assessed with a computer-based system, known as the Nike Sensory Station. One hundred twenty-five subjects (56 men, 69 women), age 18 to 30, completed Phase I of the study. Subjects attended 2 sessions, separated by at least 1 week, in which identical protocols were followed. Subjects completed the following assessments: Visual Clarity, Contrast Sensitivity, Depth Perception, Near-Far Quickness, Target Capture, Perception Span, Eye-Hand Coordination, Go/No Go, and Reaction Time. An additional 36 subjects (20 men, 16 women), age 22 to 35, completed Phase II of the study involving modifications to the equipment, instructions, and protocols from Phase I. Results show no significant change in performance over time on assessments of Visual Clarity, Contrast Sensitivity, Depth Perception, Target Capture, Perception Span, and Reaction Time. Performance did improve over time for Near-Far Quickness, Eye-Hand Coordination, and Go/No Go. The results of this study show that many of the Nike Sensory Station assessments show repeatability and no learning effect over time. The measures that did improve across sessions show an expected learning effect caused by the motor response characteristics being measured. Copyright © 2011 American Optometric Association. Published by Elsevier Inc. All rights reserved.
Visual context processing deficits in schizophrenia: effects of deafness and disorganization.
Horton, Heather K; Silverstein, Steven M
2011-07-01
Visual illusions allow for strong tests of perceptual functioning. Perceptual impairments can produce superior performance on certain tasks (i.e., more veridical perception), thereby avoiding generalized deficit confounds while tapping mechanisms that are largely outside of conscious control. Using a task based on the Ebbinghaus illusion, a perceptual phenomenon in which the perceived size of a central target object is affected by the size of surrounding inducers, we tested hypotheses related to visual integration in deaf (n = 31) and hearing (n = 34) patients with schizophrenia. In past studies, psychiatrically healthy samples displayed increased visual integration relative to schizophrenia samples and thus were less able to correctly judge target sizes. Deafness, and especially the use of sign language, leads to heightened sensitivity to peripheral visual cues and increased sensitivity to visual context. Therefore, relative to hearing subjects, deaf subjects were expected to display increased context sensitivity (i.e., a more normal illusion effect, as evidenced by a decreased ability to correctly judge central target sizes). Confirming the hypothesis, deaf signers were significantly more sensitive to the illusion than nonsigning hearing patients. Moreover, an earlier age of sign language acquisition, higher levels of linguistic ability, and shorter illness duration were significantly related to increased context sensitivity. As predicted, disorganization was associated with reduced context sensitivity for all subjects. The primary implication of these data is that perceptual organization impairment in schizophrenia is plastic and related to a broader failure in coordinating cognitive activity.
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
Senses of smell are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Bigness is in the eye of the beholder. [size and distance perception of pilots in flight
NASA Technical Reports Server (NTRS)
Roscoe, S. N.
1985-01-01
This report reviews an investigation of judgments of size and distance as required of pilots in flight. The experiments covered a broad spectrum of basic psychophysiological issues involving the measurement of visual accommodation and its correlation with various other dependent variables. Psychophysiological issues investigated included the size-distance invariance hypothesis, the projection of afterimages, the moon illusion, night and empty-field myopia, the dark focus and its so-called Mandelbaum effect, the nature and locus of the accommodative stimulus, the relation between accommodation, retinal size, and perceived size, and possible relationships among accommodative responses, autonomic balance, and personality variables.
Slow changing postural cues cancel visual field dependence on self-tilt detection.
Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L
2015-01-01
Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slowly changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection differed substantially between visual-field-dependent and visual-field-independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, in which slowly changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.
Subjective perception of natural scenes: the role of color
NASA Astrophysics Data System (ADS)
Bianchi-Berthouze, Nadia
2003-01-01
The subjective perception of colors has been extensively studied, with a focus on single colors or on combinations of a few colors. Not much has been done, however, to understand the subjective perception of colors in other contexts, where color is not a single feature. This is the task the Kansei community in Japan has set itself, exploring subjective experiences of perception, and of colors in particular, given their obvious influence on human emotional states. The motivation is to create computational models of users' visual perceptions, so that computers can be endowed with the ability to personalize the visual aspects of their computational tasks according to their user. Such a capability is hypothesized to be very important in fields such as printing, information search, design support, advertisement, etc. In this paper, we present our experimental results in the study of color as a contextual feature of images, rather than in isolation. The experiments aim at understanding the mechanisms linked to the personal perception of colors in complex images, and at understanding the formation of color categories when labeling experiences related to color perception.
Structural and functional changes across the visual cortex of a patient with visual form agnosia.
Bridge, Holly; Thomas, Owen M; Minini, Loredana; Cavina-Pratesi, Cristiana; Milner, A David; Parker, Andrew J
2013-07-31
Loss of shape recognition in visual-form agnosia occurs without equivalent losses in the use of vision to guide actions, providing support for the hypothesis of two visual systems (for "perception" and "action"). The human individual DF received a toxic exposure to carbon monoxide some years ago, which resulted in a persisting visual-form agnosia that has been extensively characterized at the behavioral level. We conducted a detailed high-resolution MRI study of DF's cortex, combining structural and functional measurements. We present the first accurate quantification of the changes in thickness across DF's occipital cortex, finding the most substantial loss in the lateral occipital cortex (LOC). There are reduced white matter connections between LOC and other areas. Functional measures show pockets of activity that survive within structurally damaged areas. The topographic mapping of visual areas showed that ordered retinotopic maps were evident for DF in the ventral portions of visual cortical areas V1, V2, V3, and hV4. Although V1 shows evidence of topographic order in its dorsal portion, such maps could not be found in the dorsal parts of V2 and V3. We conclude that it is not possible to understand fully the deficits in object perception in visual-form agnosia without the exploitation of both structural and functional measurements. Our results also highlight for DF the cortical routes through which visual information is able to pass to support her well-documented abilities to use visual information to guide actions.
Effects of color combination and ambient illumination on visual perception time with TFT-LCD.
Lin, Chin-Chiuan; Huang, Kuo-Chen
2009-10-01
An empirical study was carried out to examine the effects of color combination and ambient illumination on visual perception time with a TFT-LCD. The effect of color combination was broken down into two subfactors, luminance contrast ratio and chromaticity contrast. Analysis indicated that the luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception time. Visual perception time was shorter at a high luminance contrast ratio than at a low luminance contrast ratio. Visual perception time under normal ambient illumination was shorter than at other ambient illumination levels, although the stimulus color had a confounding effect on visual perception time. In general, visual perception time was shorter for the primary colors than for the middle-point colors. Based on the results, a normal ambient illumination level and a high luminance contrast ratio seem to be the optimal choice for the design of workplaces with TFT-LCD video display terminals.
Controlling Attention through Action: Observing Actions Primes Action-Related Stimulus Dimensions
ERIC Educational Resources Information Center
Fagioli, Sabrina; Ferlazzo, Fabio; Hommel, Bernhard
2007-01-01
Previous findings suggest that planning an action "backward-primes" perceptual dimensions related to this action: planning a grasp facilitates the processing of visual size information, while planning a reach facilitates the processing of location information. Here we show that dimensional priming of perception through action occurs even in the…
Reframing the action and perception dissociation in DF: haptics matters, but how?
Whitwell, Robert L; Buckingham, Gavin
2013-02-01
Goodale and Milner's (1992) "vision-for-action" and "vision-for-perception" account of the division of labor between the dorsal and ventral "streams" has come to dominate contemporary views of the functional roles of these two pathways. Nevertheless, some lines of evidence for the model remain controversial. Recently, Thomas Schenk reexamined visual form agnosic patient DF's spared anticipatory grip scaling to object size, one of the principal empirical pillars of the model. Based on this new evidence, Schenk rejects the original interpretation of DF's spared ability that was based on segregated processing of object size and argues that DF's spared grip scaling relies on haptic feedback to calibrate visual egocentric cues that relate the posture of the hand to the visible edges of the goal-object. However, a careful consideration of the tasks that Schenk employed reveals some problems with his claim. We suspect that the core issues of this controversy will require a closer examination of the role that cognition plays in the operation of the dorsal and ventral streams in healthy controls and in patient DF.
Weight status and the perception of body image in men
Gardner, Rick M
2014-01-01
Understanding the role of body size in relation to the accuracy of body image perception in men is an important topic because of the implications for avoiding and treating obesity, and it may serve as a potential diagnostic criterion for eating disorders. The early research on this topic produced mixed findings. About one-half of the early studies showed that obese men overestimated their body size, with the remaining half providing accurate estimates. Later, improvements in research technology and methodology provided a clearer indication of the role of weight status in body image perception. Research in our laboratory has also produced diverse findings, including that obese subjects sometimes overestimate their body size. However, when examining our findings across several studies, obese subjects had about the same level of accuracy in estimating their body size as normal-weight subjects. Studies in our laboratory also permitted the separation of sensory and nonsensory factors in body image perception. In all but one instance, no differences were found overall between the ability of obese and normal-weight subjects to detect overall changes in body size. Importantly, however, obese subjects are better at detecting changes in their body size when the image is distorted to be too thin as compared to too wide. Both obese and normal-weight men require about a 3%–7% change in the width of their body size in order to detect the change reliably. Correlations between a range of body mass index values and body size estimation accuracy indicated no relationship between these variables. Numerous studies in other laboratories asked men to place their body size into discrete categories, ranging from thin to obese. Researchers found that overweight and obese men underestimate their weight status, and that men are less accurate in their categorizations than are women.
Cultural influences have been found to be important, with body size underestimations occurring in cultures where a larger body is found to be desirable. Methodological issues are reviewed with recommendations for future studies. PMID:25114606
Human balancing of an inverted pendulum: is sway size controlled by ankle impedance?
Loram, Ian D; Kelly, Sue M; Lakie, Martin
2001-01-01
Using the ankle musculature, subjects balanced a large inverted pendulum. The equilibrium of the pendulum is unstable and quasi-regular sway was observed like that in quiet standing. Two main questions were addressed. Can subjects systematically change sway size in response to instruction and availability of visual feedback? If so, do subjects decrease sway size by increasing ankle impedance or by some alternative mechanism? The position of the pendulum, the torque generated at each ankle and the soleus and tibialis anterior EMG were recorded. Results showed that subjects could significantly reduce the mean sway size of the pendulum by giving full attention to that goal. With visual feedback sway size could be minimised significantly more than without visual feedback. In changing sway size, the frequency of the sways was not changed. Results also revealed that ankle impedance and muscle co-contraction were not significantly changed when the sway size was decreased. As the ankle impedance and sway frequency do not change when the sway size is decreased, this implies no change in ankle stiffness or viscosity. Increasing ankle impedance, stiffness or viscosity are not the only methods by which sway size could be reduced. A reduction in torque noise or torque inaccuracy via a predictive process which provides active damping could reduce sway size without changing ankle impedance and is plausible given the data. Such a strategy involving motion recognition and generation of an accurate motor response may require higher levels of control than changing ankle impedance by altering reflex or feedforward gain. PMID:11313453
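The stiffness argument above can be made concrete with a minimal simulation (a sketch with illustrative parameter values, not data from the study): a pure stiffness-plus-damping ankle torque stabilizes an inverted pendulum only when the stiffness K exceeds the gravitational load stiffness m·g·h.

```python
import math

def peak_sway(K, B, theta0=0.02, dt=0.001, T=5.0, m=70.0, h=1.0, g=9.81):
    """Euler simulation of an inverted pendulum balanced by an
    ankle-like torque -K*theta - B*dtheta (illustrative values)."""
    I = m * h * h                      # point-mass inertia about the pivot
    theta, dtheta, peak = theta0, 0.0, abs(theta0)
    for _ in range(int(T / dt)):
        torque = m * g * h * math.sin(theta) - K * theta - B * dtheta
        dtheta += (torque / I) * dt
        theta += dtheta * dt
        peak = max(peak, abs(theta))
    return peak

load_stiffness = 70.0 * 9.81 * 1.0    # m*g*h: toppling torque per radian
print(peak_sway(K=0.8 * load_stiffness, B=50.0))  # under-stiff: sway grows large
print(peak_sway(K=1.5 * load_stiffness, B=50.0))  # stiff enough: sway stays bounded
```

This illustrates why finding reduced sway without increased ankle impedance points, as the authors argue, to an active predictive mechanism rather than simple stiffness modulation.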
On spatial attention and its field size on the repulsion effect
Cutrone, Elizabeth K.; Heeger, David J.; Carrasco, Marisa
2018-01-01
We investigated the attentional repulsion effect—stimuli appear displaced further away from attended locations—in three experiments: one with exogenous (involuntary) attention, and two with endogenous (voluntary) attention with different attention-field sizes. It has been proposed that differences in attention-field size can account for qualitative differences in neural responses elicited by attended stimuli. We used psychophysical comparative judgments and manipulated either exogenous attention via peripheral cues or endogenous attention via central cues and a demanding rapid serial visual presentation task. We manipulated the attention field size of endogenous attention by presenting streams of letters at two specific locations or at two of many possible locations during each block. We found a robust attentional repulsion effect in all three experiments: with endogenous and exogenous attention and with both attention-field sizes. These findings advance our understanding of the influence of spatial attention on the perception of visual space and help relate this repulsion effect to possible neurophysiological correlates.
NASA Technical Reports Server (NTRS)
Reschke, M. F.; Parker, D. E.; Arrott, A. P.
1986-01-01
Report discusses physiological and physical concepts of proposed training system to precondition astronauts to weightless environment. System prevents motion sickness, often experienced during early part of orbital flight, and also helps prevent seasickness and other forms of terrestrial motion sickness. Training affects subject's perception of inner-ear signals, visual signals, and kinesthetic motion perception. Changed perception resembles that of astronauts who spent many days in space and adapted to weightlessness.
Do Visually Impaired People Develop Superior Smell Ability?
Majchrzak, Dorota; Eberhard, Julia; Kalaus, Barbara; Wagner, Karl-Heinz
2017-10-01
It is well known that visually impaired people perform better in orientation by sound than sighted individuals, but it is not clear whether this enhanced awareness also extends to other senses. Therefore, the aim of this study was to observe whether visually impaired subjects develop superior abilities in olfactory perception to compensate for their lack of vision. We investigated the odor perception of visually impaired individuals aged 7 to 89 (n = 99; 52 women, 47 men) and compared them with subjects of a control group aged 8 to 82 years (n = 100; 45 women, 55 men) without any visual impairment. The participants were evaluated by Sniffin' Sticks odor identification and discrimination test. Identification ability was assessed for 16 common odors presented in felt-tip pens. In the odor discrimination task, subjects had to determine which of three pens in 16 triplets had a different odor. The median number of correctly identified odorant pens in both groups was the same, 13 of the offered 16. In the discrimination test, there was also no significant difference observed. Gender did not influence results. Age-related changes were observed in both groups with olfactory perception decreasing after the age of 51. We could not confirm that visually impaired people were better in smell identification and discrimination ability than sighted individuals.
Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions.
Pigarev, Ivan N; Levichkina, Ekaterina V
2016-01-01
Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in the visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with spatial frequency of one grating being twice as high as that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with a spatial frequency twice as high should be shifted to a distance half way closer to the screen in order to attain the same size of retinal projection. For hypothetical neurons selective to absolute depth, location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.
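The control logic of this paradigm rests on simple projective geometry: a grating with twice the spatial frequency projects the same retinal spatial frequency at half the viewing distance. A sketch (the helper name and numeric values are illustrative, not taken from the study):

```python
import math

def retinal_sf(screen_cycles_per_cm, distance_cm):
    """Cycles per degree of visual angle for a grating viewed on a screen
    (small-angle geometry; illustrative helper, not from the paper)."""
    period_cm = 1.0 / screen_cycles_per_cm            # width of one cycle
    angle_deg = math.degrees(2 * math.atan(period_cm / (2 * distance_cm)))
    return 1.0 / angle_deg

# Doubling the screen spatial frequency and halving the viewing
# distance yields the same retinal projection:
print(round(retinal_sf(0.5, 200), 3))   # → 1.745
print(round(retinal_sf(1.0, 100), 3))   # → 1.745 (identical)
```

A spatial-frequency-tuned neuron should therefore shift its response peak to half the distance for the doubled grating, whereas an absolute-depth-tuned neuron should respond at the same distance for both, which is the dissociation the experiment exploits.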
End-to-End Flow Control for Visual-Haptic Communication under Bandwidth Change
NASA Astrophysics Data System (ADS)
Yashiro, Daisuke; Tian, Dapeng; Yakoh, Takahiro
This paper proposes an end-to-end flow controller for visual-haptic communication. A visual-haptic communication system transmits non-real-time packets, which contain large-size visual data, and real-time packets, which contain small-size haptic data. When the transmission rate of visual data exceeds the communication bandwidth, the visual-haptic communication system becomes unstable owing to buffer overflow. To solve this problem, an end-to-end flow controller is proposed. This controller determines the optimal transmission rate of visual data on the basis of the traffic conditions, which are estimated by the packets for haptic communication. Experimental results confirm that in the proposed method, a short packet-sending interval and a short delay are achieved under bandwidth change, and thus, high-precision visual-haptic communication is realized.
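The abstract does not give the controller's equations; a simplified additive-increase/multiplicative-decrease stand-in (all names and thresholds here are assumptions) conveys the idea of throttling the visual stream using delay estimates obtained from the haptic packets:

```python
def adapt_rate(rate_kbps, observed_delay_ms, target_delay_ms=5.0,
               increase_kbps=50.0, decrease_factor=0.7,
               min_rate=100.0, max_rate=10_000.0):
    """AIMD-style stand-in for the paper's flow controller:
    haptic-packet delay above target implies congestion, so the
    visual rate is cut multiplicatively; otherwise probe upward."""
    if observed_delay_ms > target_delay_ms:
        rate_kbps *= decrease_factor     # back off before buffers overflow
    else:
        rate_kbps += increase_kbps       # additive probing for spare bandwidth
    return max(min_rate, min(rate_kbps, max_rate))

rate = 2000.0
for delay in [3, 3, 12, 3]:              # ms, as estimated from haptic packets
    rate = adapt_rate(rate, delay)
print(rate)                              # rate dipped after the congestion event
```

The key design point from the paper survives in the sketch: the small real-time haptic packets double as continuous traffic probes, so the bulky visual stream can be throttled before buffer overflow destabilizes the system.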
Camouflage, communication and thermoregulation: lessons from colour changing organisms.
Stuart-Fox, Devi; Moussalli, Adnan
2009-02-27
Organisms capable of rapid physiological colour change have become model taxa in the study of camouflage because they are able to respond dynamically to the changes in their visual environment. Here, we briefly review the ways in which studies of colour changing organisms have contributed to our understanding of camouflage and highlight some unique opportunities they present. First, from a proximate perspective, comparison of visual cues triggering camouflage responses and the visual perception mechanisms involved can provide insight into general visual processing rules. Second, colour changing animals can potentially tailor their camouflage response not only to different backgrounds but also to multiple predators with different visual capabilities. We present new data showing that such facultative crypsis may be widespread in at least one group, the dwarf chameleons. From an ultimate perspective, we argue that colour changing organisms are ideally suited to experimental and comparative studies of evolutionary interactions between the three primary functions of animal colour patterns: camouflage; communication; and thermoregulation.
Thigpen, Nina N; Bartsch, Felix; Keil, Andreas
2017-04-01
Emotional experience changes visual perception, leading to the prioritization of sensory information associated with threats and opportunities. These emotional biases have been extensively studied by basic and clinical scientists, but their underlying mechanism is not known. The present study combined measures of brain-electric activity and autonomic physiology to establish how threat biases emerge in human observers. Participants viewed stimuli designed to differentially challenge known properties of different neuronal populations along the visual pathway: location, eye, and orientation specificity. Biases were induced using aversive conditioning with only 1 combination of eye, orientation, and location predicting a noxious loud noise and replicated in a separate group of participants. Selective heart rate-orienting responses for the conditioned threat stimulus indicated bias formation. Retinotopic visual brain responses were persistently and selectively enhanced after massive aversive learning for only the threat stimulus and dissipated after extinction training. These changes were location-, eye-, and orientation-specific, supporting the hypothesis that short-term plasticity in primary visual neurons mediates the formation of perceptual biases to threat.
NASA Technical Reports Server (NTRS)
Parker, D. E.; Reschke, M. F.; Von Gierke, H. E.; Lessard, C. S.
1987-01-01
The preflight adaptation trainer (PAT) was designed to produce rearranged relationships between visual and otolith signals analogous to those experienced in space. Investigations have been undertaken with three prototype trainers. The results indicated that exposure to the PAT sensory rearrangement altered self-motion perception, induced motion sickness, and changed the amplitude and phase of the horizontal eye movements evoked by roll stimulation. However, the changes were inconsistent.
The uncrowded window of object recognition
Pelli, Denis G; Tillman, Katharine A
2009-01-01
It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191
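The Bouma law lends itself to a two-line model. A minimal sketch (the proportionality constant of roughly 0.5 is the commonly cited value for Bouma's constant, not stated in the abstract):

```python
def critical_spacing(eccentricity_deg, bouma=0.5):
    """Bouma's law: the spacing needed to avoid crowding grows linearly
    with eccentricity (bouma ~0.5 is the commonly cited constant)."""
    return bouma * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg):
    """An object is crowded when its neighbors fall inside critical spacing."""
    return spacing_deg < critical_spacing(eccentricity_deg)

# Letters 2 deg apart: recognizable at 3 deg eccentricity, crowded at 10 deg.
print(is_crowded(2.0, 3.0))    # → False
print(is_crowded(2.0, 10.0))   # → True
```

The 'uncrowded window' of the title is then simply the region of the visual field where this predicate is false for the given object spacing.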
McGurk illusion recalibrates subsequent auditory perception
Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.
2016-01-01
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept of ‘ada’. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960
Ambiguous Figures – What Happens in the Brain When Perception Changes But Not the Stimulus
Kornmeier, Jürgen; Bach, Michael
2011-01-01
During observation of ambiguous figures, our perception reverses spontaneously although the visual information stays unchanged. Research on this phenomenon has so far suffered from the difficulty of determining the instant of the endogenous reversals with sufficient temporal precision. A novel experimental paradigm with discontinuous stimulus presentation improved on previous temporal estimates of the reversal event by a factor of three. It revealed that disambiguation of ambiguous visual information takes roughly 50 ms, or two loops of recurrent neural activity. Further, the decision about the perceptual outcome takes place at least 340 ms before the observer is able to indicate the consciously perceived reversal manually. We provide a short review of physiological studies on multistable perception with a focus on electrophysiological data. We further present a new perspective on multistable perception that can easily integrate previous, apparently contradictory explanatory approaches. Finally, we propose possible extensions toward other research fields where ambiguous figure perception may be useful as an investigative tool. PMID:22461773
Transformation priming helps to disambiguate sudden changes of sensory inputs.
Pastukhov, Alexander; Vivian-Griffiths, Solveiga; Braun, Jochen
2015-11-01
Retinal input is riddled with abrupt transients due to self-motion, changes in illumination, object motion, etc. Our visual system must correctly interpret each of these changes to keep visual perception consistent and sensitive. This poses an enormous challenge, as many transients are highly ambiguous in that they are consistent with many alternative physical transformations. Here we investigated inter-trial effects in three situations with sudden and ambiguous transients, each presenting two alternative appearances (rotation-reversing structure-from-motion, polarity-reversing shape-from-shading, and streaming-bouncing object collisions). In every situation, we observed priming of transformations: the outcome perceived in earlier trials tended to repeat in subsequent trials, and this repetition was contingent on perceptual experience. The observed priming was specific to transformations and did not originate in priming of perceptual states preceding a transient. Moreover, transformation priming was independent of attention and specific to low-level stimulus attributes. In summary, we show how "transformation priors" and experience-driven updating of such priors help to disambiguate sudden changes of sensory inputs. We discuss how dynamic transformation priors can be instantiated as "transition energies" in an "energy landscape" model of visual perception.
NASA Astrophysics Data System (ADS)
Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur
This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families were collected using a General Information Form, and the visual perception of children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder and to discover whether the variables of gender, preschool education and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically significant difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were significantly affected by gender, preschool education and parents' educational status.
Virtual-reality techniques resolve the visual cues used by fruit flies to evaluate object distances.
Schuster, Stefan; Strauss, Roland; Götz, Karl G
2002-09-17
Insects can estimate distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate the first comprehensive solutions to this problem by substituting virtual for real objects; a tracker-controlled 360 degrees panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced in the small fruit fly deserves utilization in self-moving robots.
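The two cues the flies weigh against each other can be written down directly. A small-angle sketch (the function names and numbers are illustrative, chosen only to fall inside the 20-200 mm range reported above):

```python
def distance_from_parallax(self_speed, angular_velocity):
    """Motion parallax: for an object abeam of a translating observer,
    distance = self-motion speed / induced retinal angular velocity
    (small-angle geometry; illustrative formula, not from the paper)."""
    return self_speed / angular_velocity

def time_to_contact(angular_size, expansion_rate):
    """Looming cue (tau): angular size over its rate of expansion."""
    return angular_size / expansion_rate

# A fly walking at 20 mm/s that sees an object sweep at 0.25 rad/s:
print(distance_from_parallax(20.0, 0.25))   # → 80.0 mm
```

Result (1) above says parallax-based distance estimates like this operate over 20-200 mm, and result (2) says they dominate the expansion-based tau cue, which instead drives impact-avoidance reactions.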
Guo, Bing-bing; Zheng, Xiao-lin; Lu, Zhen-gang; Wang, Xing; Yin, Zheng-qin; Hou, Wen-sheng; Meng, Ming
2015-01-01
Visual cortical prostheses have the potential to restore partial vision. Still limited by the low-resolution visual percepts provided by visual cortical prostheses, implant wearers can currently only “see” pixelized images, and how to obtain the specific brain responses to different pixelized images in the primary visual cortex (the implant area) is still unknown. We conducted a functional magnetic resonance imaging experiment on normal human participants to investigate the brain activation patterns in response to 18 different pixelized images. Each activation pattern comprised 100 voxels selected from the primary visual cortex, with a voxel size of 4 mm × 4 mm × 4 mm. Multi-voxel pattern analysis was used to test whether these 18 brain activation patterns were specific. We chose a Linear Support Vector Machine (LSVM) as the classifier in this study. The results showed that the classification accuracies of the different brain activation patterns were significantly above chance level, which suggests that the classifier can successfully distinguish the brain activation patterns. Our results suggest that specific brain activation patterns to different pixelized images can be obtained in the primary visual cortex using a 4 mm × 4 mm × 4 mm voxel size and a 100-voxel pattern. PMID:26692860
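The decoding step can be sketched in a few lines. The study used a linear SVM; a nearest-centroid classifier (a simpler linear decoder, substituted here purely to keep the sketch dependency-free) illustrates the same above-chance test on simulated 100-voxel patterns; all data below are synthetic:

```python
import random

def nearest_centroid_decode(train, test):
    """Classify each test pattern by its closest class-mean pattern
    (a simple linear stand-in for the LSVM used in the study)."""
    centroids = {
        label: [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]
        for label, vecs in train.items()
    }
    def classify(x):
        return min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(x, centroids[lab])))
    return [classify(x) for x in test]

# Simulated 100-voxel activation patterns for two pixelized images
# whose mean responses differ slightly.
random.seed(0)
def pattern(mean):
    return [random.gauss(mean, 1.0) for _ in range(100)]

train = {"imgA": [pattern(0.0) for _ in range(10)],
         "imgB": [pattern(0.5) for _ in range(10)]}
test = [pattern(0.0) for _ in range(5)] + [pattern(0.5) for _ in range(5)]
truth = ["imgA"] * 5 + ["imgB"] * 5
pred = nearest_centroid_decode(train, test)
accuracy = sum(p == t for p, t in zip(pred, truth)) / len(truth)
print(accuracy > 0.5)   # decoding well above the 0.5 chance level
```

The logic of the paper's test is the same: if held-out patterns are classified significantly above chance, the activation patterns carry image-specific information at the candidate implant resolution.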
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers’ percepts of their own spatial location based on the visual scene information viewed from that position. Previous studies indicate that two distinct visual spatial processes exist in locomotion situations: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments containing only static lane edge information. We investigated the visual factors associated with static lane edge information that may affect these perceptions. In particular, we examined the effects of two factors on egocentric direction and position perceptions. One is the “uprightness factor”: “far” visual information is seen at a higher location than “near” visual information. The other is the “central vision factor”: observers usually look at “far” visual information using central (i.e., foveal) vision, whereas they view “near” visual information using peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images, in which the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is impaired by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception.
Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895
Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor
2017-01-01
Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. 
Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.
PMID:28303100
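The neural population dynamics underlying this architecture can be illustrated with a minimal one-dimensional dynamic field in the spirit of Dynamic Field Theory. This is a sketch only; the parameter values and the simple Gaussian-excitation/global-inhibition kernel are illustrative assumptions, not those of the paper's robot implementation:

```python
import numpy as np

def simulate_field(stimulus, steps=300, dt=0.01, tau=0.1, h=-5.0,
                   beta=2.0, c_exc=15.0, c_inh=10.0, sigma=3.0):
    """Euler-integrate a 1D Amari-style neural field:
    tau * du/dt = -u + h + S(x) + local excitation - global inhibition.
    All parameter values here are illustrative assumptions."""
    n = len(stimulus)
    u = np.full(n, h)                          # start at resting level h
    x = np.arange(-15, 16)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                     # normalized excitatory kernel
    for _ in range(steps):
        g = 1.0 / (1.0 + np.exp(-beta * u))    # sigmoid firing rate
        exc = c_exc * np.convolve(g, kernel, mode="same")
        inh = c_inh * g.mean()                 # global inhibition
        u = u + (dt / tau) * (-u + h + stimulus + exc - inh)
    return u

# a localized input induces a self-stabilized activation peak,
# while the rest of the field stays subthreshold
s = np.zeros(101)
s[45:56] = 8.0
u = simulate_field(s)
```

The self-stabilized peak is what makes online updating possible in such models: if the stimulus shifts mid-simulation, the peak tracks it continuously rather than being re-planned from scratch.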
Thøgersen, Mikkel; Hansen, John; Arendt-Nielsen, Lars; Flor, Herta; Petrini, Laura
2018-07-16
The purpose of the present study was to assess changes in body perception when visual feedback was removed from the hand and arm, so as to resemble the visual deprivation arising from amputation. The illusion was created by removing the visual feedback from the participants' own left forearm using a mixed reality (MR) and green screen environment. Thirty healthy persons (15 female) participated in the study. Each subject experienced two MR conditions, one with and one without visual feedback from the left hand, and a baseline condition with normal vision of the limb (no MR). Body perception was assessed using proprioceptive drift, questionnaires on body perception, and thermal sensitivity measures (cold, warm, heat pain and cold pain detection thresholds). The proprioceptive drift showed a significant shift of the tip of the index finger (p<0.001) towards the elbow in the illusion condition (mean drift: -3.71 cm). Self-report showed a significant decrease in ownership (p<0.001), a shift in perceptual distortions (e.g., "It feels as if my lower arm has become shorter") (p=0.025), and changes in sensations of the hand (tingling, tickling) (p=0.025). A significant decrease was also observed in the cold detection threshold (p<0.001), i.e., the detection threshold was cooler than for the control conditions. The proprioceptive drift together with the self-reported questionnaire showed that the participants felt a proximal retraction of their limb, resembling the telescoping experienced by phantom limb patients. The study highlights the influence of missing visual feedback and its possible contribution to phantom limb phenomena. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Bo; State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Science, Beijing 100101; Xia Jing
Physiological and behavioral studies have demonstrated that a number of visual functions such as visual acuity, contrast sensitivity, and motion perception can be impaired by acute alcohol exposure. The orientation- and direction-selective responses of cells in primary visual cortex are thought to participate in the perception of form and motion. To investigate how orientation selectivity and direction selectivity of neurons are influenced by acute alcohol exposure in vivo, we used the extracellular single-unit recording technique to examine the response properties of neurons in primary visual cortex (A17) of adult cats. We found that alcohol reduces spontaneous activity, visual evoked unit responses, the signal-to-noise ratio, and orientation selectivity of A17 cells. In addition, small but detectable changes in both the preferred orientation/direction and the bandwidth of the orientation tuning curve of strongly orientation-biased A17 cells were observed after acute alcohol administration. Our findings may provide physiological evidence for some alcohol-related deficits in visual function observed in behavioral studies.
Visual Memories Bypass Normalization.
Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam
2018-05-01
How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores-neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
PMID:29596038
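The canonical computation tested here, divisive normalization, can be sketched in a few lines: a unit's stimulus drive is divided by the summed drive of a normalization pool, so a high-contrast neighbor suppresses the response to an otherwise identical stimulus. The Naka-Rushton-style parameters below are illustrative assumptions, not values fitted by the study:

```python
def normalized_response(contrast, pool_contrasts, n=2.0, sigma=0.1, r_max=1.0):
    """Divisive normalization: response = drive / (semisaturation + drive + pool).
    n, sigma, and r_max are illustrative parameter choices."""
    drive = contrast ** n
    pool = sum(c ** n for c in pool_contrasts)
    return r_max * drive / (sigma ** n + drive + pool)

# the same 50%-contrast stimulus, alone vs. flanked by an 80%-contrast neighbor
alone = normalized_response(0.5, [])
suppressed = normalized_response(0.5, [0.8])
```

The study's result is that perceived contrast follows this suppressive pattern, while remembered contrast behaves as if the pool term were absent.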
Evaluation of three-dimensional virtual perception of garments
NASA Astrophysics Data System (ADS)
Aydoğdu, G.; Yeşilpinar, S.; Erdem, D.
2017-10-01
In recent years, three-dimensional design, dressing and simulation programs came into prominence in the textile industry. With these programs, the need to produce clothing samples for every design in the design process has been eliminated. Clothing fit, design, pattern, fabric and accessory details and fabric drape features can be evaluated easily. Also, the body size of the virtual mannequin can be adjusted, so more realistic simulations can be created. Moreover, three-dimensional virtual garment images created by these programs can be used when presenting the product to the end-user instead of two-dimensional photograph images. In this study, a survey was carried out to investigate the visual perception of consumers. The survey was conducted for three different garment types, separately. Questions about gender, profession, etc. were asked of the participants, who were then expected to compare real samples with artworks or three-dimensional virtual images of garments. When the survey results were analyzed statistically, it was seen that the demographic situation of participants does not affect visual perception and that three-dimensional virtual garment images reflect the real sample characteristics better than artworks for each garment type. Also, no perception difference depending on garment type was found between the t-shirt, sweatshirt and tracksuit bottom.
Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M
2011-01-01
Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. 
Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection.
PMID:21991328
Takahashi, Chie; Watt, Simon J.
2014-01-01
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices. 
PMID:24592245
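The "statistically optimal" combination rule at the heart of this study is maximum-likelihood cue fusion: each cue is weighted by its relative reliability (inverse variance), and the fused estimate is more precise than either cue alone. A minimal sketch, with made-up numbers for a visual size estimate and a haptic estimate obtained through a pliers-like tool:

```python
def fuse_estimates(mu_v, var_v, mu_h, var_h):
    """Reliability-weighted (maximum-likelihood) cue combination.
    Weights are relative reliabilities; the fused variance is never
    larger than that of the more reliable single cue."""
    r_v, r_h = 1.0 / var_v, 1.0 / var_h   # reliability = inverse variance
    w_v = r_v / (r_v + r_h)
    w_h = 1.0 - w_v
    mu = w_v * mu_v + w_h * mu_h
    var = 1.0 / (r_v + r_h)
    return mu, var

# hypothetical numbers: vision says 50 mm (variance 4); the tool-mediated
# haptic estimate says 56 mm (variance 16, degraded by the tool's gain)
mu, var = fuse_estimates(50.0, 4.0, 56.0, 16.0)
```

In the experiments, tool gain changes var_h, and the observed weights shifted with gain just as this rule predicts.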
3D Visualization of Mangrove and Aquaculture Conversion in Banate Bay, Iloilo
NASA Astrophysics Data System (ADS)
Domingo, G. A.; Mallillin, M. M.; Perez, A. M. C.; Claridades, A. R. C.; Tamondong, A. M.
2017-10-01
Studies have shown that mangrove forests in the Philippines have been drastically reduced due to conversion to fishponds, salt ponds, reclamation, and other forms of industrial development; as of 2011, 95 % of Iloilo's mangrove forest had been converted to fishponds. In this research, six (6) Landsat images acquired in the years 1973, 1976, 2000, 2006, 2010, and 2016 were classified using Support Vector Machine (SVM) Classification to determine land cover changes, particularly the area change of mangrove and aquaculture from 1976 to 2016. The results of the classification were used as layers for the generation of 3D visualization models using four (4) platforms, namely Google Earth, ArcScene, Virtual Terrain Project, and Terragen. A perception survey was conducted among respondents with different levels of expertise in spatial analysis, 3D visualization, as well as in forestry, fisheries, and aquatic resources to assess the usability, effectiveness, and potential of the various platforms used. Change detection showed that the largest negative change for mangrove areas happened from 1976 to 2000, with the mangrove area decreasing from 545.374 hectares to 286.935 hectares. The highest increase in fishpond area occurred from 1973 to 1976, rising from 2,930.67 hectares to 3,441.51 hectares. Results of the perception survey showed that ArcScene is preferred for spatial analysis, while respondents favored Terragen for 3D visualization and for forestry, fishery and aquatic resources applications.
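The post-classification change statistics reported above (per-class areas and mangrove-to-fishpond conversion) reduce to simple raster arithmetic on the classified maps. A minimal sketch with a toy raster; the class codes and the 2x3 grid are made up for illustration (one 30 m Landsat pixel covers 0.09 ha):

```python
import numpy as np

PIXEL_HA = 0.09          # area of one 30 m x 30 m Landsat pixel, in hectares

def class_area(raster, code):
    """Total area (ha) of one land-cover class in a classified raster."""
    return np.count_nonzero(raster == code) * PIXEL_HA

def converted_area(before, after, src, dst):
    """Area (ha) of pixels that changed from class src to class dst."""
    return np.count_nonzero((before == src) & (after == dst)) * PIXEL_HA

# toy classified maps; codes: 0 = other, 1 = mangrove, 2 = fishpond
before = np.array([[1, 1, 2],
                   [1, 2, 2]])
after  = np.array([[2, 1, 2],
                   [2, 2, 2]])
lost_to_ponds = converted_area(before, after, 1, 2)
```

The same tally over the full classified Landsat scenes yields the hectare figures quoted in the abstract.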
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
PMID:28792518
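The event-generation principle behind neuromorphic cameras such as the DAVIS 240B can be sketched on dense frames: a pixel emits an ON or OFF event when its log intensity changes by more than a contrast threshold. This is a frame-based caricature of the asynchronous sensor, and the threshold value is an illustrative assumption:

```python
import numpy as np

def events(prev_frame, frame, theta=0.2):
    """Emit +1 (ON) where log intensity rose by more than theta,
    -1 (OFF) where it fell by more than theta, 0 otherwise.
    theta is an illustrative contrast threshold."""
    d = np.log(frame + 1e-6) - np.log(prev_frame + 1e-6)
    out = np.zeros_like(d, dtype=int)
    out[d > theta] = 1
    out[d < -theta] = -1
    return out

prev = np.array([[10.0, 10.0],
                 [10.0, 10.0]])
curr = np.array([[10.0, 20.0],
                 [ 5.0, 10.5]])
ev = events(prev, curr)   # doubling and halving fire; a 5% change does not
```

Because the threshold is on log intensity, the response is contrast-based rather than absolute: a doubling fires an event regardless of the starting brightness.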
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.
Zhou, Yang; Wu, Dewei
2016-01-01
Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similar measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similar measure, and recruiting of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of the image, and a similar measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of firing field (AFFF) and the firing rate's threshold (FRT).
PMID:27597859
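The recruit-or-fire logic described above can be sketched directly: compute a Gaussian of the Euclidean distance between the current landmark feature vector and each stored template, and recruit a new place cell when the best similarity falls below the firing rate's threshold (FRT). The feature vectors, the width parameter, and the threshold value here are illustrative assumptions, not the paper's:

```python
import math

def similarity(f, template, width=1.0):
    """Gaussian similar-measure of Euclidean distance (1 = identical view)."""
    d = math.dist(f, template)
    return math.exp(-d * d / (2.0 * width ** 2))

def update_place_cells(cells, f, frt=0.5, width=1.0):
    """Recruit a new visual place cell when no stored template is
    similar enough to the current feature vector f (threshold frt)."""
    sims = [similarity(f, c, width) for c in cells]
    if not cells or max(sims) < frt:
        cells.append(list(f))
    return cells

cells = []
update_place_cells(cells, [0.0, 0.0])   # first view: recruit a cell
update_place_cells(cells, [0.1, 0.0])   # nearby view: existing cell fires
update_place_cells(cells, [5.0, 5.0])   # novel view: recruit another cell
```

Widening `width` broadens each cell's firing field, and raising `frt` makes recruitment more frequent, mirroring the AFFF and FRT adjustments described in the abstract.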
Catecholamines alter the intrinsic variability of cortical population activity and perception
Avramiea, Arthur-Ervin; Nolte, Guido; Engel, Andreas K.; Linkenkaer-Hansen, Klaus; Donner, Tobias H.
2018-01-01
The ascending modulatory systems of the brain stem are powerful regulators of global brain state. Disturbances of these systems are implicated in several major neuropsychiatric disorders. Yet, how these systems interact with specific neural computations in the cerebral cortex to shape perception, cognition, and behavior remains poorly understood. Here, we probed into the effect of two such systems, the catecholaminergic (dopaminergic and noradrenergic) and cholinergic systems, on an important aspect of cortical computation: its intrinsic variability. To this end, we combined placebo-controlled pharmacological intervention in humans, recordings of cortical population activity using magnetoencephalography (MEG), and psychophysical measurements of the perception of ambiguous visual input. A low-dose catecholaminergic, but not cholinergic, manipulation altered the rate of spontaneous perceptual fluctuations as well as the temporal structure of “scale-free” population activity of large swaths of the visual and parietal cortices. Computational analyses indicate that both effects were consistent with an increase in excitatory relative to inhibitory activity in the cortical areas underlying visual perceptual inference. We propose that catecholamines regulate the variability of perception and cognition through dynamically changing the cortical excitation–inhibition ratio. The combined readout of fluctuations in perception and cortical activity we established here may prove useful as an efficient and easily accessible marker of altered cortical computation in neuropsychiatric disorders. PMID:29420565
Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.
Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C
2016-01-01
A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.
Short-term memory for figure-ground organization in the visual cortex.
O'Herron, Philip; von der Heydt, Rüdiger
2009-03-12
Whether the visual system uses a buffer to store image information and the duration of that storage have been debated intensely in recent psychophysical studies. The long phases of stable perception of reversible figures suggest a memory that persists for seconds. But persistence of similar duration has not been found in signals of the visual cortex. Here, we show that figure-ground signals in the visual cortex can persist for a second or more after the removal of the figure-ground cues. When new figure-ground information is presented, the signals adjust rapidly, but when a figure display is changed to an ambiguous edge display, the signals decay slowly--a behavior that is characteristic of memory devices. Figure-ground signals represent the layout of objects in a scene, and we propose that a short-term memory for object layout is important in providing continuity of perception in the rapid stream of images flooding our eyes.
Kiat, John E; Dodd, Michael D; Belli, Robert F; Cheadle, Jacob E
2018-05-01
Neuroimaging-based investigations of change blindness, a phenomenon in which seemingly obvious changes in visual scenes fail to be detected, have significantly advanced our understanding of visual awareness. The vast majority of prior investigations, however, utilize paradigms involving visual disruptions (e.g., intervening blank screens, saccadic movements, "mudsplashes"), making it difficult to isolate neural responses toward visual changes cleanly. To address this issue in this present study, high-density EEG data (256 channel) were collected from 25 participants using a paradigm in which visual changes were progressively introduced into detailed real-world scenes without the use of visual disruption. Oscillatory activity associated with undetected changes was contrasted with activity linked to their absence using standardized low-resolution brain electromagnetic tomography (sLORETA). Although an insufficient number of detections were present to allow for analysis of actual change detection, increased beta-2 activity in the right inferior parietal lobule (rIPL), a region repeatedly associated with change blindness in disruption paradigms, followed by increased theta activity in the right superior temporal gyrus (rSTG) was noted in undetected visual change responses relative to the absence of change. We propose the rIPL beta-2 activity to be associated with orienting attention toward visual changes, with the subsequent rise in rSTG theta activity being potentially linked with updating preconscious perceptual memory representations. NEW & NOTEWORTHY This study represents the first neuroimaging-based investigation of gradual change blindness, a visual phenomenon that has significant potential to shed light on the processes underlying visual detection and conscious perception. 
The use of gradual change materials is reflective of real-world visual phenomena and allows for cleaner isolation of signals associated with the neural registration of change relative to the use of abrupt change transients.
Atypical Face Perception in Autism: A Point of View?
Morin, Karine; Guy, Jacalyn; Habak, Claudine; Wilson, Hugh R; Pagani, Linda; Mottron, Laurent; Bertone, Armando
2015-10-01
Face perception is the most commonly used visual metric of social perception in autism. However, when found to be atypical, the origin of face perception differences in autism is contentious. One hypothesis proposes that a locally oriented visual analysis, characteristic of individuals with autism, ultimately affects performance on face tasks where a global analysis is optimal. The objective of this study was to evaluate this hypothesis by assessing face identity discrimination with synthetic faces presented with and without changes in viewpoint, with the former condition minimizing access to local face attributes used for identity discrimination. Twenty-eight individuals with autism and 30 neurotypical participants performed a face identity discrimination task. Stimuli were synthetic faces extracted from traditional face photographs in both front and 20° side viewpoints, digitized from 37 points to provide a continuous measure of facial geometry. Face identity discrimination thresholds were obtained using a two-alternative, temporal forced choice match-to-sample paradigm. Analyses revealed an interaction between group and condition, with group differences found only for the viewpoint change condition, where performance in the autism group was decreased compared to that of neurotypical participants. The selective decrease in performance for the viewpoint change condition suggests that face identity discrimination in autism is more difficult when access to local cues is minimized, and/or when dependence on integrative analysis is increased. These results lend support to a perceptual contribution of atypical face perception in autism. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Neural Processing of Congruent and Incongruent Audiovisual Speech in School-Age Children and Adults
ERIC Educational Resources Information Center
Heikkilä, Jenni; Tiippana, Kaisa; Loberg, Otto; Leppänen, Paavo H. T.
2018-01-01
Seeing articulatory gestures enhances speech perception. Perception of auditory speech can even be changed by incongruent visual gestures, a phenomenon known as the McGurk effect (e.g., when a voice saying /mi/ is dubbed onto a face articulating /ni/, observers often hear /ni/). In children, the McGurk effect is weaker than in adults, but no previous…
Global Transsaccadic Change Blindness During Scene Perception
2003-09-01
Altered figure-ground perception in monkeys with an extra-striate lesion.
Supèr, Hans; Lamme, Victor A F
2007-11-05
The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception, we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield, whereas perception is normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.
The genesis of errors in drawing.
Chamberlain, Rebecca; Wagemans, Johan
2016-06-01
The difficulty adults find in drawing objects or scenes from real life is puzzling, assuming that there are few gross individual differences in the phenomenology of visual scenes and in fine motor control in the neurologically healthy population. A review of research concerning the perceptual, motoric and memorial correlates of drawing ability was conducted in order to understand why most adults err when trying to produce faithful representations of objects and scenes. The findings reveal that accurate perception of the subject and of the drawing is at the heart of drawing proficiency, although not to the extent that drawing skill elicits fundamental changes in visual perception. Instead, the decisive role of representational decisions reveals the importance of appropriate segmentation of the visual scene and of the influence of pictorial schemas. This leads to the conclusion that domain-specific, flexible, top-down control of visual attention plays a critical role in development of skill in visual art and may also be a window into creative thinking. Copyright © 2016 Elsevier Ltd. All rights reserved.
Most, Tova; Michaelis, Hilit
2012-08-01
This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.
The Whole Warps the Sum of Its Parts.
Corbett, Jennifer E
2017-01-01
The efficiency of averaging properties of sets without encoding redundant details is analogous to gestalt proposals that perception is parsimoniously organized as a function of recurrent order in the world. This similarity suggests that grouping and averaging are part of a broader set of strategies allowing the visual system to circumvent capacity limitations. To examine how gestalt grouping affects the manner in which information is averaged and remembered, I compared the error in observers' adjustments of remembered sizes of individual circles in two different mean-size sets defined by similarity, proximity, connectedness, or a common region. Overall, errors were more similar within the same gestalt-defined groups than between different gestalt-defined groups, such that the remembered sizes of individual circles were biased toward the mean size of their respective gestalt-defined groups. These results imply that gestalt grouping facilitates perceptual averaging to minimize the error with which individual items are encoded, thereby optimizing the efficiency of visual short-term memory.
A horse's eye view: size and shape discrimination compared with other mammals.
Tomonaga, Masaki; Kumazaki, Kiyonori; Camus, Florine; Nicod, Sophie; Pereira, Carlos; Matsuzawa, Tetsuro
2015-11-01
Mammals have adapted to a variety of natural environments from underwater to aerial and these different adaptations have affected their specific perceptive and cognitive abilities. This study used a computer-controlled touchscreen system to examine the visual discrimination abilities of horses, particularly regarding size and shape, and compared the results with those from chimpanzee, human and dolphin studies. Horses were able to discriminate a difference of 14% in circle size but showed worse discrimination thresholds than chimpanzees and humans; these differences cannot be explained by visual acuity. Furthermore, the present findings indicate that all species use length cues rather than area cues to discriminate size. In terms of shape discrimination, horses exhibited perceptual similarities among shapes with curvatures, vertical/horizontal lines and diagonal lines, and the relative contributions of each feature to perceptual similarity in horses differed from those for chimpanzees, humans and dolphins. Horses pay more attention to local components than to global shapes. © 2015 The Author(s).
The Moon as a Tiny Bright Disc: Insights From Observations in the Planetarium.
Carbon, Claus-Christian
2015-01-01
Despite a relatively constant visual angle, the size of the moon appears very variable, mostly depending on elevation and context factors--the so-called moon illusion. As our perceptual experience of the size of the moon is clearly limited to the perceptual sphere of the sky, however, we do not know whether the typical perception of the moon at its zenith reflects a veridical interpretation of its visual angle of only 0.5 degrees. When testing the moon illusion in a large-scale planetarium, we observed two important things: (a) variation in perceptual size was no longer apparent, and (b) the moon looked very much smaller than in any viewing condition in the real sky--even when comparing it at its zenith. A closer inspection of the control console of the planetarium revealed that classic-analog as well as updated-digital planetariums use projections of the moon with strongly increased sizes to compensate for the loss of a natural view of the moon in the artificial dome of the sky.
Perception and control of rotorcraft flight
NASA Technical Reports Server (NTRS)
Owen, Dean H.
1991-01-01
Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationships among these problems are the basis of this article.
ERIC Educational Resources Information Center
Washington County Public Schools, Washington, PA.
Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…
[Visual perception and its disorders].
Ruf-Bächtiger, L
1989-11-21
It is the brain, not the eye, that decides what is perceived. In spite of this fact, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, various functional domains of visual perception can be distinguished. Among the more important of these domains are: digit span, visual discrimination and figure-ground discrimination. Evaluation of these functional domains allows us to understand children with disorders of visual perception better and to develop more effective treatment methods.
Fang, Ying; Zhang, Ying
2017-01-01
Visual motor integration (VMI) is a vital ability in childhood development and is associated with the performance of many functional skills. Using the Beery Developmental Test Package and Executive Function Tasks, the present study explored VMI development and its factors (visual perception, motor coordination, and executive function) among 151 Chinese preschoolers from 4 to 6 years of age. Results indicated that children's VMI skills increased quickly at age 4, peaked at age 5, and declined between 5 and 6 years. Motor coordination and cognitive flexibility were related to VMI development from 4 to 6 years. Visual perception was associated with VMI development early in the fourth year, and inhibitory control was associated with it among 4-year-olds and children at the beginning of their fifth year. Working memory had no impact on VMI. In conclusion, the development of VMI skills among preschool children in this study was not stable but changed dynamically, and the factors underlying VMI operated in different age ranges. These findings may offer guidance to researchers and health professionals on improving children's VMI skills in early childhood. PMID:29457030
ERIC Educational Resources Information Center
Brown, Ted; Murdolo, Yuki
2015-01-01
The "Developmental Test of Visual Perception-Third Edition" (DTVP-3) is a recent revision of the "Developmental Test of Visual Perception-Second Edition" (DTVP-2). The DTVP-3 is designed to assess the visual perceptual and/or visual-motor integration skills of children from 4 to 12 years of age. The test is standardized using…
A Critical Review of the "Motor-Free Visual Perception Test-Fourth Edition" (MVPT-4)
ERIC Educational Resources Information Center
Brown, Ted; Peres, Lisa
2018-01-01
The "Motor-Free Visual Perception Test-fourth edition" (MVPT-4) is a revised version of the "Motor-Free Visual Perception Test-third edition." The MVPT-4 is used to assess the visual-perceptual ability of individuals aged 4.0 through 80+ years via a series of visual-perceptual tasks that do not require a motor response. Test…
The repeatability of mean defect with size III and size V standard automated perimetry.
Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A
2013-02-15
The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship of MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V each five times over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship of variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.
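The quantile-regression step described above, regressing the variability of MD against MD itself, can be sketched in miniature. The MD and SD values below are hypothetical, and a coarse grid search over the pinball (check) loss stands in for a proper quantile-regression solver; tau = 0.95 approximates an upper confidence limit like the one the study reports.

```python
# Minimal sketch of quantile regression via the pinball (check) loss.
# All data are invented for illustration, not from the paper.

def pinball_loss(y_true, y_pred, tau):
    """Asymmetric check loss whose minimizer targets the tau-th quantile."""
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        r = yt - yp
        total += tau * r if r >= 0 else (tau - 1) * r
    return total

def fit_quantile_line(x, y, tau, slopes, intercepts):
    """Coarse grid search for the line minimizing the pinball loss."""
    best = None
    for a in slopes:
        for b in intercepts:
            loss = pinball_loss(y, [a * xi + b for xi in x], tau)
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]  # slope, intercept

# Hypothetical data: MD in dB (more negative = more damage) vs. SD of MD.
md = [-1, -3, -5, -8, -12, -15, -20]
sd = [0.6, 0.8, 1.0, 1.4, 1.9, 2.3, 3.0]

slope_grid = [i / 100 for i in range(-30, 1)]   # candidate slopes
intercept_grid = [i / 10 for i in range(0, 21)] # candidate intercepts
slope, intercept = fit_quantile_line(md, sd, 0.95, slope_grid, intercept_grid)
print(slope, intercept)  # negative slope: variability grows with field damage
```

In practice a dedicated solver (e.g. linear-programming quantile regression) replaces the grid search; the sketch only makes the loss being minimized explicit.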
An aftereffect of adaptation to mean size
Corbett, Jennifer E.; Wurnitsch, Nicole; Schwartz, Alex; Whitney, David
2013-01-01
The visual system rapidly represents the mean size of sets of objects. Here, we investigated whether mean size is explicitly encoded by the visual system along a single dimension, like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is an adaptable, and therefore explicitly encoded, dimension of visual scenes. PMID:24348083
Perceived Average Orientation Reflects Effective Gist of the Surface.
Cha, Oakyoon; Chong, Sang Chul
2018-03-01
The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.
Association of Chronic Subjective Tinnitus with Neuro- Cognitive Performance.
Gudwani, Sunita; Munjal, Sanjay K; Panda, Naresh K; Kohli, Adarsh
2017-12-01
Chronic subjective tinnitus is associated with cognitive disruptions affecting perception, thinking, language, reasoning, problem solving, memory, visual tasks (reading) and attention. The aim was to evaluate whether any association exists between tinnitus parameters and neuropsychological performance that might explain cognitive processing. The study design was prospective, comprising 25 patients with idiopathic chronic subjective tinnitus, all of whom gave informed consent before their treatment was planned. The neuropsychological profile included (i) performance on verbal information, comprehension, arithmetic and digit span; (ii) non-verbal performance for visual pattern completion analogies; (iii) memory performance for long-term, recent, delayed-recall, immediate-recall, verbal-retention, visual-retention and visual recognition; (iv) reception, interpretation and execution for visual motor gestalt. Correlations of tinnitus onset duration and loudness perception with the neuropsychological profile were assessed by calculating Spearman's coefficient. Findings suggest that tinnitus may interfere with cognitive processing, especially performance on digit span, verbal comprehension, mental balance, attention and concentration, immediate recall, visual recognition and visual-motor gestalt subtests. Negative correlations of neurocognitive tasks with tinnitus loudness and onset duration indicated their association, and a positive correlation between tinnitus and visual-motor gestalt performance indicated brain dysfunction. The association of tinnitus with non-auditory processing of verbal, visual and visuo-spatial information suggests neuroplastic changes that need to be targeted in cognitive rehabilitation.
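The Spearman rank correlation used above to relate tinnitus parameters to test performance can be sketched in pure Python. The loudness and digit-span values below are hypothetical, not data from the study; midranks handle ties, as standard implementations do.

```python
# Minimal sketch of Spearman's rank correlation with tie handling.
# Example data are invented for illustration.

def rank(values):
    """Assign 1-based ranks, averaging ties (midrank method)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical pattern: louder tinnitus, lower digit-span score.
loudness = [2, 4, 5, 7, 8, 10]
digit_span = [7, 7, 6, 5, 4, 3]
rho = spearman_rho(loudness, digit_span)
print(round(rho, 3))  # strongly negative, near -1
```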
Parietal cortex mediates perceptual Gestalt grouping independent of stimulus size.
Grassi, Pablo R; Zaretskaya, Natalia; Bartels, Andreas
2016-06-01
The integration of local moving elements into a unified gestalt percept has previously been linked to the posterior parietal cortex. There are two possible interpretations for the lack of involvement of other occipital regions. The first is that parietal cortex is indeed uniquely functionally specialized to perform grouping. Another possibility is that other visual regions can perform grouping as well, but that the large spatial separation of the local elements used previously exceeded their neurons' receptive field (RF) sizes, preventing their involvement. In this study we distinguished between these two alternatives. We measured whole-brain activity using fMRI in response to a bistable motion illusion that induced mutually exclusive percepts of either an illusory global Gestalt or of local elements. The stimulus was presented in two sizes, a large version known to activate IPS only, and a version sufficiently small to fit into the RFs of mid-level dorsal regions such as V5/MT. We found that none of the separately localized motion regions apart from parietal cortex showed a preference for global Gestalt perception, even for the smaller version of the stimulus. This outcome suggests that grouping-by-motion is mediated by a specialized size-invariant mechanism with parietal cortex as its anatomical substrate. Copyright © 2016 Elsevier Inc. All rights reserved.
Do domestic dogs (Canis lupus familiaris) perceive the Delboeuf illusion?
Miletto Petrazzini, Maria Elena; Bisazza, Angelo; Agrillo, Christian
2017-05-01
In the last decade, visual illusions have been repeatedly used as a tool to compare visual perception among species. Several studies have investigated whether non-human primates perceive visual illusions in a human-like fashion, but little attention has been paid to other mammals, and sensitivity to visual illusions had never been investigated in the dog. Here, we studied whether domestic dogs perceive the Delboeuf illusion. In human and non-human primates, this illusion creates a misperception of item size as a function of its surrounding context. To examine this effect in dogs, we adapted the spontaneous preference paradigm recently used with chimpanzees. Subjects were presented with two plates containing food. In control trials, two different amounts of food were presented in two identical plates. In this circumstance, dogs were expected to select the larger amount. In test trials, equal food portion sizes were presented in two plates differing in size: if dogs perceived the illusion as primates do, they were expected to select the amount of food presented in the smaller plate. Dogs significantly discriminated the two alternatives in control trials, whereas their performance did not differ from chance in test trials with the illusory pattern. The fact that dogs do not seem to be susceptible to the Delboeuf illusion suggests a potential discontinuity in the perceptual biases affecting size judgments between primates and dogs.
Boosting pitch encoding with audiovisual interactions in congenital amusia.
Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne
2015-01-01
The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Stepinski, T. F.; Mitasova, H.; Jasiewicz, J.; Neteler, M.; Gebbert, S.
2014-12-01
GRASS GIS is a leading open source GIS for geospatial analysis and modeling. In addition to being utilized as a desktop GIS, it also serves as a processing engine for high performance geospatial computing for applications in diverse disciplines. The newly released GRASS GIS 7 supports big data analysis, including a temporal framework, image segmentation, watershed analysis, synchronized 2D/3D animations and many other capabilities. This presentation will focus on new GRASS GIS 7-powered tools for geoprocessing giga-size earth observation (EO) data using spatial pattern analysis. Pattern-based analysis connects to human visual perception of space and makes geoprocessing of giga-size EO data possible in an efficient and robust manner. GeoPAT is a collection of GRASS GIS 7 modules that fully integrates procedures for pattern representation of EO data and pattern-similarity calculations with standard GIS tasks of mapping, map overlay, segmentation, classification (Fig. 1a), change detection, etc. GeoPAT works very well on a desktop, but it also underpins several GeoWeb applications (http://sil.uc.edu/) which allow users to analyze selected EO datasets without the need to download them. The GRASS GIS 7 temporal framework and high resolution visualizations will be illustrated using a time series of giga-size, lidar-based digital elevation models representing the dynamics of North Carolina barrier islands over the past 15 years. The temporal framework supports efficient raster and vector data series analysis and simplifies data input for visual analysis of dynamic landscapes (Fig. 1b), allowing users to rapidly identify vulnerable locations, changes in the built environment and eroding coastlines. Numerous improvements in GRASS GIS 7 were implemented to support terabyte-size data processing for reconstruction of MODIS land surface temperature (LST) at 250 m resolution using multiple regressions and PCA (Fig. 1c).
The new MODIS LST series (http://gis.cri.fmach.it/eurolst/) includes four maps per day since the year 2000, providing improved data for epidemiological predictions, viticulture, assessment of urban heat islands and numerous other applications. The presentation will conclude with an outline of future development of big-data interfaces to further enhance web-based GRASS GIS data analysis.
Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon
2016-03-01
According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Murakoshi, Takuma; Masuda, Tomohiro; Utsumi, Ken; Tsubota, Kazuo; Wada, Yuji
2013-01-01
Previous studies have reported effects of the statistics of luminance distribution on visual freshness perception using pictures that captured the degradation process of food samples. However, these studies did not examine the effect of individual differences between the same kinds of food. Here we examine whether luminance distribution continues to have a significant effect on visual freshness perception even when visual stimuli include individual differences in addition to the degradation process of foods. We took pictures of the degradation of three fishes over 3.29 hours in a controlled environment, then cropped square patches of their eyes from the original images as visual stimuli. Eleven participants performed paired comparison tests judging the visual freshness of the fish eyes at three points of degradation. Perceived freshness scores (PFS) were calculated using the Bradley-Terry model for each image. The ANOVA revealed that the PFS for each fish decreased as the degradation time increased; however, the differences in the PFS between individual fish were larger for the shorter degradation time and smaller for the longer degradation time. A multiple linear regression analysis was conducted in order to determine the relative importance of the statistics of luminance distribution of the stimulus images in predicting PFS. The results show that standard deviation and skewness of the luminance distribution have a significant influence on PFS. These results show that even if foodstuffs contain individual differences, visual freshness perception and changes in luminance distribution correlate with degradation time.
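Bradley-Terry scoring of paired comparisons, as used above to derive perceived freshness scores, can be sketched as follows. The win counts are hypothetical, and the MM (Zermelo) iteration shown is one standard way to fit the model, not necessarily the authors' exact procedure.

```python
# Minimal sketch of Bradley-Terry strength estimation from paired
# comparisons. w[i][j] = number of trials in which stimulus i was
# judged "fresher" than stimulus j (invented counts).

def bradley_terry(wins, iters=200):
    """Fit BT strengths by the standard MM (Zermelo) iteration."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            num = sum(wins[i][j] for j in range(n) if j != i)          # total wins of i
            den = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])        # comparisons, weighted
                      for j in range(n) if j != i)
            new_p.append(num / den if den > 0 else p[i])
        s = sum(new_p)
        p = [v / s * n for v in new_p]  # normalize for numerical stability
    return p

# Three degradation stages: fresh tends to beat mid, mid tends to beat old.
wins = [[0, 8, 10],   # fresh
        [3, 0, 7],    # mid
        [1, 4, 0]]    # old
scores = bradley_terry(wins)
print(scores)  # strengths decrease with degradation stage
```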
The development of visual speech perception in Mandarin Chinese-speaking children.
Chen, Liang; Lei, Jianghua
2017-01-01
The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13, after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception, with peak performance in 13-year-olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell, for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; and (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in the development of visual speech perception.
Cortical visual prostheses: from microstimulation to functional percept
NASA Astrophysics Data System (ADS)
Najarpour Foroushani, Armin; Pack, Christopher C.; Sawan, Mohamad
2018-04-01
Cortical visual prostheses are intended to restore vision by targeted electrical stimulation of the visual cortex. The perception of spots of light, called phosphenes, resulting from microstimulation of the visual pathway suggests the possibility of creating a meaningful percept made of phosphenes. However, to date, electrical stimulation of V1 has still not resulted in perception of phosphenated images that goes beyond punctate spots of light. In this review, we summarize the clinical and experimental progress that has been made in generating phosphenes and modulating their associated perceptual characteristics in human and macaque primary visual cortex (V1). We focus specifically on the effects of different microstimulation parameters on perception, and we analyse key challenges facing the generation of meaningful artificial percepts. Finally, we propose solutions to these challenges based on the application of supervised learning of population codes for spatial stimulation of visual cortex.
Shourie, Nasrin; Firoozabadi, Mohammad; Badie, Kambiz
2014-01-01
In this paper, differences between multichannel EEG signals of artists and nonartists were analyzed during visual perception and mental imagery of some paintings, and at rest, using approximate entropy (ApEn). ApEn was significantly higher for artists during both visual perception and mental imagery in the frontal lobe, suggesting that artists process more information during these conditions. It was also observed that ApEn decreases in both groups during visual perception due to increasing mental load; however, their variation patterns differ. This difference may be used for measuring progress in novice artists. In addition, ApEn was significantly lower during visual perception than during mental imagery in some of the channels, suggesting that the visual perception task requires more cerebral effort.
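Approximate entropy, the regularity measure applied to the EEG channels above, can be sketched in pure Python. The signals below are synthetic, and the parameter choices (m = 2, r = 0.2) follow common conventions rather than the study's settings; a regular signal should score lower than an irregular one.

```python
# Minimal sketch of approximate entropy (ApEn) with self-matches
# included, the classic Pincus formulation: ApEn = phi(m) - phi(m+1).

import math
import random

def apen(signal, m, r):
    """Approximate entropy of a 1-D signal."""
    def phi(mm):
        n = len(signal) - mm + 1
        templates = [signal[i:i + mm] for i in range(n)]
        total = 0.0
        for t1 in templates:
            # Chebyshev distance; self-match guarantees matches >= 1.
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

# A periodic signal is highly regular; uniform noise is not.
periodic = [(-1) ** i for i in range(60)]          # 1, -1, 1, -1, ...
random.seed(0)
noisy = [random.uniform(-1, 1) for _ in range(60)]
print(apen(periodic, 2, 0.2), apen(noisy, 2, 0.2))  # regular << irregular
```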
A model of color vision with a robot system
NASA Astrophysics Data System (ADS)
Wang, Haihui
2006-01-01
In this paper, we propose to generalize the saccade target method and state that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent visual stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that has to be compensated for in order to perceive a stable world. In this paper, we provide a simple implementation of this sensory-motor contingency view of perceptual stability. We show how a straightforward application of the temporal-difference reinforcement learning technique yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
How to reinforce perception of depth in single two-dimensional pictures
NASA Technical Reports Server (NTRS)
Nagata, S.
1989-01-01
The physical conditions for displaying single 2-D pictures that produce realistic images were studied using the characteristics of how information for visual depth perception is acquired. Depth sensitivity, defined as the ratio of viewing distance to the depth discrimination threshold, was introduced in order to evaluate the availability of various cues for depth perception: binocular parallax, motion parallax, accommodation, convergence, size, texture, brightness, and air-perspective contrast. The effects of binocular parallax in different conditions, whose depth sensitivity is greatest at distances up to about 10 m, were studied with the new versatile stereoscopic display. From these results, four conditions to reinforce the perception of depth in single pictures were proposed; these conditions are met by the old viewing devices and by the new high-definition and wide television displays.
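The depth-sensitivity measure can be illustrated for the binocular-parallax cue using the standard small-angle relation; the stereoacuity and interocular-distance values below are typical textbook assumptions, not Nagata's measured data:

```python
import math

def stereo_depth_threshold(d, eta_arcsec=10.0, iod=0.065):
    """Smallest detectable depth difference (m) from binocular parallax at
    viewing distance d (m), via the small-angle relation
    delta_d ~ d**2 * eta / iod, with stereoacuity eta and interocular
    distance iod (illustrative values)."""
    eta = eta_arcsec * math.pi / (180.0 * 3600.0)  # arcsec -> radians
    return d ** 2 * eta / iod

def depth_sensitivity(d, **kw):
    """Depth sensitivity in Nagata's sense: viewing distance divided by the
    depth discrimination threshold at that distance."""
    return d / stereo_depth_threshold(d, **kw)
```

Because the threshold grows with the square of distance, sensitivity falls off as 1/d, which is why binocular parallax dominates only out to roughly 10 m in this framework.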
The Sound and Feel of Titrations: A Smartphone Aid for Color-Blind and Visually Impaired Students
ERIC Educational Resources Information Center
Bandyopadhyay, Subhajit; Rathod, Balraj B.
2017-01-01
An Android-based application has been developed to provide color-blind and visually impaired students a multisensory perception of color change observed in a titration. The application records and converts the color information into beep sounds and vibration pulses, which are generated by the smartphone. It uses a range threshold of hue and…
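The hue-to-feedback idea can be sketched roughly as follows; the threshold band, pitch mapping, and function name here are illustrative assumptions, not the app's actual implementation:

```python
import colorsys

def hue_to_feedback(r, g, b, hue_band=(0.9, 0.1)):
    """Illustrative sketch: convert an RGB camera reading to
    (beep_frequency_hz, vibrate) feedback. A hue range threshold flags the
    titration endpoint colour; the wrap-around red band [0.9, 1.0] U [0.0, 0.1]
    assumed here is hypothetical."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    lo, hi = hue_band
    vibrate = h >= lo or h <= hi      # endpoint reached: trigger vibration pulse
    beep_hz = 300.0 + 900.0 * h       # beep pitch tracks the hue value
    return beep_hz, vibrate
```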
Great bowerbirds create theaters with forced perspective when seen by their audience.
Endler, John A; Endler, Lorna C; Doerr, Natalie R
2010-09-28
Birds in the infraorder Corvida [1] (ravens, jays, bowerbirds) are renowned for their cognitive abilities [2-4], which include advanced problem solving with spatial inference [4-8], tool use and complex constructions [7-10], and bowerbird cognitive ability is associated with mating success [11]. Great bowerbird males construct bowers with a long avenue from within which females view the male displaying over his bower court [10]. This predictable audience viewpoint is a prerequisite for forced (altered) visual perspective [12-14]. Males make courts with gray and white objects that increase in size with distance from the avenue entrance. This gradient creates forced visual perspective for the audience; court object visual angles subtended on the female viewer's eye are more uniform than if the objects were placed at random. Forced perspective can yield false perception of size and distance [12, 15]. After experimental reversal of their size-distance gradient, males recovered their gradients within 3 days, and there was little difference from the original after 2 weeks. Variation among males in their forced-perspective quality as seen by their female audience indicates that visual perspective is available for use in mate choice, perhaps as an indicator of cognitive ability. Regardless of function, the creation and maintenance of forced visual perspective is clearly important to great bowerbirds and suggests the possibility of a previously unknown dimension of bird cognition. Copyright © 2010 Elsevier Ltd. All rights reserved.
Cultural Differences in Face-ism: Male Politicians Have Bigger Heads in More Gender-Equal Cultures
ERIC Educational Resources Information Center
Konrath, Sara; Au, Josephine; Ramsey, Laura R.
2012-01-01
Women are visually depicted with lower facial prominence than men, with consequences for perceptions of their competence. The current study examines the relationship between the size of this "face-ism" bias (i.e., individual or micro-level sexism) and a number of gender inequality indicators (i.e., institutional or macro-level sexism) at the…
Theory of mind and perceptual context-processing in schizophrenia.
Uhlhaas, Peter J; Phillips, William A; Schenkel, Lindsay S; Silverstein, Steven M
2006-07-01
A series of studies have suggested that schizophrenia patients are deficient in theory of mind (ToM). However, the cognitive mechanisms underlying ToM deficits in schizophrenia are largely unknown. The present study examined the hypothesis that impaired ToM in schizophrenia can be understood as a deficit in context processing. Disorganised schizophrenia patients (N = 12), nondisorganised schizophrenia patients (N = 36), and nonpsychotic psychiatric patients (N = 26) were tested on three ToM tasks and a visual size perception task, a measure of perceptual context processing. In addition, statistical analyses were carried out which compared chronic, treatment-refractory schizophrenia patients (N = 28) to those with an episodic course of illness (N = 20). Overall, ToM performance was linked to deficits in context processing in schizophrenia patients. Statistical comparisons showed that disorganised as well as chronic schizophrenia patients were more impaired in ToM but more accurate in a visual size perception task where perceptual context is misleading. This pattern of results is interpreted as indicating a possible link between deficits in ToM and perceptual context processing, which together with deficits in perceptual grouping, are part of a broader dysfunction in cognitive coordination in schizophrenia.
Altered visual perception in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2013-09-01
The present study investigated the long-term consequences of ecstasy use on visual processes thought to reflect serotonergic functions in the occipital lobe. Evidence indicates that the main psychoactive ingredient in ecstasy (methylenedioxymethamphetamine) causes long-term changes to the serotonin system in human users. Previous research has found that amphetamine-abstinent ecstasy users have disrupted visual processing in the occipital lobe that relies on serotonin, with researchers concluding that ecstasy broadens orientation tuning bandwidths. However, other processes may have accounted for these results. The aim of the present research was to determine whether amphetamine-abstinent ecstasy users have changes in occipital lobe functioning, as revealed by two studies: a masking study that directly measured the width of orientation tuning bandwidths and a contour integration task that measured the strength of long-range connections in the visual cortex of drug users compared to controls. Participants were compared on the width of orientation tuning bandwidths (26 controls, 12 ecstasy users, 10 ecstasy + amphetamine users) and the strength of long-range connections (38 controls, 15 ecstasy users, 12 ecstasy + amphetamine users) in the occipital lobe. Amphetamine-abstinent ecstasy users had significantly broader orientation tuning bandwidths than controls and significantly lower contour detection thresholds (CDTs), indicating worse performance on the task, than both controls and ecstasy + amphetamine users. These results extend previous research and are consistent with the proposal that ecstasy may damage the serotonin system, resulting in behavioral changes on tests of visual perception processes thought to reflect serotonergic functions in the occipital lobe.
Fawkner, Samantha; Henretty, Joan; Knowles, Ann-Marie; Nevill, Alan; Niven, Ailsa
2014-01-01
The aim of this study was to adopt a longitudinal design to explore the direct effects of both absolute and relative maturation and changes in body size on physical activity, and explore if, and how, physical self-perceptions might mediate this effect. We recruited 208 girls (11.8 ± 0.4 years) at baseline. Data were collected at three subsequent time points, each 6 months apart. At 18 months, 119 girls remained in the study. At each time point, girls completed the Physical Activity Questionnaire for Children, the Pubertal Development Scale (from which, both a measure of relative and absolute maturation were defined) and the Physical Self-Perception Profile, and had physical size characteristics assessed. Multilevel modelling for physical activity indicated a significant negative effect of age, positive effect for physical condition and sport competence and positive association for relatively early maturers. Absolute maturation, body mass, waist circumference and sum of skinfolds did not significantly contribute to the model. Contrary to common hypotheses, relatively more mature girls may, in fact, be more active than their less mature peers. However, neither changes in absolute maturation nor physical size appear to directly influence changes in physical activity in adolescent girls.
Separate visual representations for perception and for visually guided behavior
NASA Technical Reports Server (NTRS)
Bridgeman, Bruce
1989-01-01
Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background, or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This method also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.
Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin
2016-01-01
The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far apart they are. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and the Müller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in an illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions, and participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms are responsible for the execution of discrete and continuous Fitts' tapping: whereas discrete tapping relies on allocentric (object-centered) information to plan the action, continuous tapping relies on egocentric (self-centered) information to control the action. The planning-control model for rapid aiming movements is supported.
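The nominal and effective ID can be computed as below. This sketch uses the Shannon formulation of Fitts' law and the conventional We = 4.133·SD effective-width estimate, which may differ from the authors' exact computation:

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty (bits).
    amplitude: centre-to-centre distance between targets; width: target
    width, in the same units."""
    return math.log2(amplitude / width + 1)

def effective_id(effective_amplitude, endpoint_sd):
    """Effective ID from observed tap endpoints. The effective width
    We = 4.133 * SD reflects the spread actually produced, so an illusion
    that biases amplitude or accuracy shifts the effective ID away from
    the nominal one."""
    we = 4.133 * endpoint_sd
    return math.log2(effective_amplitude / we + 1)
```

For example, targets 2 cm wide and 8 cm apart give a nominal ID of log2(5) ≈ 2.32 bits; if the illusion changes the produced amplitude or endpoint scatter, the effective ID computed from the data moves accordingly.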
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1990-01-01
The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial frequency performance of primates or the performance that is necessary for form perception in humans. This discrepancy, together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision, prompted a series of image processing experiments intended to guide the selection of image operators. The experiments were aimed at determining operators that could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and of circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators with peak spatial frequency responses at about 11 and 33 cyc/deg were found to capture the primary structural information in images. Here, larger-scale circular DOG operators were explored; they led to severe loss of image structure and introduced blur-induced spatial dislocations in structure that are inconsistent with visual perception. Orientation-sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes that are functionally similar to natural form perception, two circularly symmetric, very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
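A circularly symmetric DOG operator of the kind discussed can be sketched as follows; the kernel size and the 1.6 centre/surround ratio are common illustrative choices, not the study's calibrated spatial frequencies:

```python
import numpy as np

def dog_kernel(size, sigma_center, ratio=1.6):
    """Circularly symmetric difference-of-Gaussians kernel.
    sigma_center sets the peak spatial frequency of the operator; ratio
    (assumed 1.6, a common surround/centre choice) sets the surround width."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2  # squared radius from the kernel centre

    def gauss(s):
        return np.exp(-r2 / (2 * s ** 2)) / (2 * np.pi * s ** 2)

    k = gauss(sigma_center) - gauss(ratio * sigma_center)
    # Zero-mean: the operator responds to structure, not to uniform fields.
    return k - k.mean()
```

Convolving an image with such a kernel gives a band-pass, edge-emphasizing response; enlarging sigma_center lowers the peak spatial frequency and blurs fine structure, which is the degradation reported for larger-scale operators.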
Visual-motor recalibration in geographical slant perception
NASA Technical Reports Server (NTRS)
Bhalla, M.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
1999-01-01
In 4 experiments, it was shown that hills appear steeper to people who are encumbered by wearing a heavy backpack (Experiment 1), are fatigued (Experiment 2), are of low physical fitness (Experiment 3), or are elderly and/or in declining health (Experiment 4). Visually guided actions are unaffected by these manipulations of physiological potential. Although dissociable, the awareness and action systems were also shown to be interconnected. Recalibration of the transformation relating awareness and actions was found to occur over long-term changes in physiological potential (fitness level, age, and health) but not with transitory changes (fatigue and load). Findings are discussed in terms of a time-dependent coordination between the separate systems that control explicit visual awareness and visually guided action.
Intranasal oxytocin reduces social perception in women: Neural activation and individual variation.
Hecht, Erin E; Robins, Diana L; Gautam, Pritam; King, Tricia Z
2017-02-15
Most intranasal oxytocin research to date has been carried out in men, but recent studies indicate that females' responses can differ substantially from males'. This randomized, double-blind, placebo-controlled study involved an all-female sample of 28 women not using hormonal contraception. Participants viewed animations of geometric shapes depicting either random movement or social interactions such as playing, chasing, or fighting. Probe questions asked whether any shapes were "friends" or "not friends." Social videos were preceded by cues to attend to either social relationships or physical size changes. All subjects received intranasal placebo spray at scan 1; while the experimenter was not blinded to nasal spray contents at scan 1, the participants were. Scan 2 followed a randomized, double-blind design: half received a second placebo dose while the other half received 24 IU of intranasal oxytocin. We measured neural responses to these animations at baseline, as well as the change in neural activity induced by oxytocin. Oxytocin reduced activation in early visual cortex and dorsal-stream motion processing regions for the social > size contrast, indicating reduced activity related to social attention. Oxytocin also reduced endorsements that shapes were "friends" or "not friends," and this significantly correlated with the reduction in neural activation. Furthermore, participants who perceived fewer social relationships at baseline were more likely to show oxytocin-induced increases in a broad network of regions involved in social perception and social cognition, suggesting that lower social processing at baseline may predict more positive neural responses to oxytocin. Copyright © 2016 Elsevier Inc. All rights reserved.
Thomas, K Jackson; Denham, Bryan E; Dinolfo, John D
2011-01-01
This pilot study was designed to assess the perceptions of physical therapy (PT) and occupational therapy (OT) students regarding the use of computer-assisted pedagogy and prosection-oriented communications in the laboratory component of a human anatomy course at a comprehensive health sciences university in the southeastern United States. The goal was to determine whether student perceptions changed over the course of a summer session regarding verbal, visual, tactile, and web-based teaching methodologies. Pretest and post-test surveys were distributed online to students who volunteered to participate in the pilot study. Despite the relatively small sample size, statistically significant results indicated that PT and OT students who participated in this study perceived an improved ability to name major anatomical structures from memory, to draw major anatomical structures from memory, and to explain major anatomical relationships from memory. Students differed in their preferred learning styles. This study demonstrates that the combination of small group learning and digital web-based learning seems to increase PT and OT students' confidence in their anatomical knowledge. Further research is needed to determine which forms of integrated instruction lead to improved student performance in the human gross anatomy laboratory. Copyright © 2011 American Association of Anatomists.
Díaz-Santos, Mirella; Cao, Bo; Mauro, Samantha A.; Yazdanbakhsh, Arash; Neargarder, Sandy; Cronin-Golomb, Alice
2017-01-01
Parkinson’s disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions, including passive viewing and two volitional-control conditions (hold one percept in front; and switch: speed up the alternation between the two). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that are different from in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity. PMID:25765890
McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan
2018-04-01
To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers, however no studies investigated exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
Learning what to expect (in visual perception)
Seriès, Peggy; Seitz, Aaron R.
2013-01-01
Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unresolved, however, for example: How fast do priors change over time? Are there limits on the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning, and we review the possible neural basis of priors. PMID:24187536
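The role of a prior is easiest to see in the Gaussian case. Below is a minimal sketch of the widely used "slow-speed" prior model of visual motion perception; the symbols and numerical values are illustrative, not taken from the reviewed studies:

```python
def posterior_speed(v_measured, sigma_likelihood, sigma_prior, v_prior=0.0):
    """Posterior mean for perceived speed when a Gaussian prior (mean
    v_prior, s.d. sigma_prior) is combined with a Gaussian sensory
    likelihood (mean v_measured, s.d. sigma_likelihood).

    With a slow-speed prior (v_prior = 0), the noisier the sensory
    measurement, the more the percept is pulled toward zero speed."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_likelihood ** 2)
    return w * v_measured + (1 - w) * v_prior
```

At high contrast (reliable likelihood) the percept tracks the stimulus; at low contrast the prior dominates and the stimulus appears slower, a classic Bayesian account of contrast-dependent speed misperception.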
A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.
Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T
2015-09-01
We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses-the neurally controlled body patterns-that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.
Wright, W Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.
Ambron, Elisabetta; White, Nicole; Faseyitan, Olufunsho; Kessler, Sudha K; Medina, Jared; Coslett, H Branch
2018-04-18
Changes in the perceived size of a body part using magnifying lenses influence tactile perception and pain. We investigated whether the visual magnification of one's hand also influences the motor system, as indexed by transcranial magnetic stimulation (TMS)-induced motor evoked potentials (MEPs). In Experiment 1, MEPs were measured while participants gazed at their hand with and without magnification of the hand. MEPs were significantly larger when participants gazed at a magnified image of their hand. In Experiment 2, we demonstrated that this effect is specific to the hand that is visually magnified. TMS of the left motor cortex did not induce an increase of MEPs when participants looked at their magnified left hand. Experiment 3 was performed to determine if magnification altered the topography of the cortical representation of the hand. To that end, a 3 × 5 grid centered on the cortical hot spot (cortical location at which a motor threshold is obtained with the lowest level of stimulation) was overlaid on the participant's MRI image, and all 15 sites in the grid were stimulated with and without magnification of the hand. We confirmed the increase in the MEPs at the hot spot with magnification and demonstrated that MEPs significantly increased with magnification at sites up to 16.5 mm from the cortical hot spot. In Experiment 4, we used paired-pulse TMS to measure short-interval intracortical inhibition and intracortical facilitation. Magnification was associated with an increase in short-interval intracortical inhibition. These experiments demonstrate that the visual magnification of one's hand induces changes in motor cortex excitability and generates a rapid remapping of the cortical representation of the hand that may, at least in part, be mediated by changes in short-interval intracortical inhibition.
Optical phonetics and visual perception of lexical and phrasal stress in English.
Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer
2009-01-01
In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in the perceptual problems of dyslexics. One contested research issue in this area is the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out with adults and children who already read; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in preschool children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical linguistic and nonlinguistic stimuli were presented in both the visual and auditory tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia are impaired. On the temporal tasks, children at risk performed worse than children without risk in both modalities; there were no group differences on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected linguistic and nonlinguistic stimuli alike. We conclude that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for other auditory stimuli. Because these visual and auditory perceptual deficits cannot be the consequence of failing to learn to read, the findings support the temporal processing deficit theory. Copyright © 2014 Elsevier Ltd. All rights reserved.
Eye movements and attention in reading, scene perception, and visual search.
Rayner, Keith
2009-08-01
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.
Audio-visual speech perception: a developmental ERP investigation
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
2014-01-01
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002
Owsley, Cynthia
2010-01-01
Given the increasing size of the older adult population in many countries, there is a pressing need to identify the nature of aging-related vision impairments, their underlying mechanisms, and how they impact older adults’ performance of everyday visual tasks. The results of this research can then be used to develop and evaluate interventions to slow or reverse aging-related declines in vision, thereby improving quality of life. Here we summarize salient developments in research on aging and vision over the past 25 years, focusing on spatial contrast sensitivity, vision under low luminance, temporal sensitivity and motion perception, and visual processing speed. PMID:20974168
Predicting bias in perceived position using attention field models.
Klein, Barrie P; Paffen, Chris L E; Pas, Susan F Te; Dumoulin, Serge O
2016-05-01
Attention is the mechanism through which we select relevant information from our visual environment. We have recently demonstrated that attention attracts receptive fields across the visual hierarchy (Klein, Harvey, & Dumoulin, 2014). We captured this receptive field attraction using an attention field model. Here, we apply this model to human perception: We predict that receptive field attraction results in a bias in perceived position, which depends on the size of the underlying receptive fields. We instructed participants to compare the relative position of Gabor stimuli, while we manipulated the focus of attention using exogenous cueing. We varied the eccentric position and spatial frequency of the Gabor stimuli to vary underlying receptive field size. The positional biases as a function of eccentricity matched the predictions by an attention field model, whereas the bias as a function of spatial frequency did not. As spatial frequency and eccentricity are encoded differently across the visual hierarchy, we speculate that they might interact differently with the attention field that is spatially defined.
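The key computation in an attention field model of this kind can be illustrated with a short sketch. This is not the authors' implementation: it assumes the standard formulation in which a neuron's 1-D Gaussian receptive field is multiplied by a Gaussian attention field, and all parameter values are illustrative.

```python
# Minimal sketch of the attention field model's key computation (not the
# authors' implementation; all parameter values are illustrative). A
# neuron's Gaussian receptive field (RF) is multiplied by a Gaussian
# attention field; the product is again Gaussian, with its center pulled
# toward the attended location.

def attracted_rf_center(rf_center, rf_sigma, att_center, att_sigma):
    """Center of the product of two 1-D Gaussians (RF x attention field)."""
    w_rf = 1.0 / rf_sigma ** 2             # precision of the receptive field
    w_att = 1.0 / att_sigma ** 2           # precision of the attention field
    return (w_rf * rf_center + w_att * att_center) / (w_rf + w_att)

att_center, att_sigma = 0.0, 2.0           # attention cued at 0 deg
for rf_sigma in (0.5, 1.0, 2.0, 4.0):      # small to large receptive fields
    shift = 5.0 - attracted_rf_center(5.0, rf_sigma, att_center, att_sigma)
    print(f"RF sigma {rf_sigma}: attracted by {shift:.2f} deg")
```

Because the product of two Gaussians has its mean pulled toward the more precise one, large (imprecise) receptive fields are attracted most strongly, which is how the model ties the size of the perceived-position bias to the size of the underlying receptive fields.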
Task relevance induces momentary changes in the functional visual field during reading.
Kaakinen, Johanna K; Hyönä, Jukka
2014-02-01
In the research reported here, we examined whether task demands can induce momentary tunnel vision during reading. More specifically, we examined whether the size of the functional visual field depends on task relevance. Forty participants read an expository text with a specific task in mind while their eye movements were recorded. A display-change paradigm with random-letter strings as preview masks was used to study the size of the functional visual field within sentences that contained task-relevant and task-irrelevant information. The results showed that orthographic parafoveal-on-foveal effects and preview benefits were observed for words within task-irrelevant but not task-relevant sentences. The results indicate that the size of the functional visual field is flexible and depends on the momentary processing demands of a reading task. The higher cognitive processing requirements experienced when reading task-relevant text rather than task-irrelevant text induce momentary tunnel vision, which narrows the functional visual field.
Study of Pattern of Change in Handwriting Class Characters with Different Grades of Myopia.
Hedge, Shruti Prabhat; Dayanidhi, Vijay Kautilya; Sriram
2015-12-01
Handwriting is a visuo-motor skill highly dependent on visual skills, and any defect in visual input can change handwriting. Understanding how handwriting characters vary with changes in visual acuity can help identify learning disabilities in children and assess disability in the elderly. In our study we analyse and catalogue these changes in a person's handwriting. The study was conducted among 100 subjects with normal visual acuity, who were asked to perform a set of writing tasks and then to repeat the same tasks after different grades of myopia were induced. Changes in the handwriting class characters were analysed and compared across all grades of myopia. We found that letter size, pastiosity, word omissions, and the inability to stay on the line all increase with changes in visual acuity; however, these findings are not proportional to the grade of myopia. From the findings of the study it can be concluded that myopia significantly influences handwriting and that any change in visual acuity induces corresponding changes in handwriting: letter size and pastiosity increase, whereas the ability to stay on the line and the spacing between lines decrease across the grades of myopia. Because the changes are not linear, they cannot be used to predict the grade of myopia, but they can serve as parameters suggestive of refractive error.
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
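The normative model the abstract builds on can be sketched in a few lines. This is an illustrative toy, not the authors' code: the two phoneme categories, their means and variances, and the 1-D cue axis are all hypothetical.

```python
import math

# Toy sketch of Bayes-optimal audio-visual categorization (not the
# authors' code; category means, variances, and the 1-D cue axis are
# hypothetical). Each cue's likelihood combines its sensory noise with
# the category's environmental (within-category) variance, so noisier
# cues and broader categories carry less weight.

def category_posterior(x_aud, x_vis, cats, sd_aud, sd_vis):
    """cats maps name -> (aud_mean, aud_env_sd, vis_mean, vis_env_sd)."""
    def loglik(x, mu, env_sd, sens_sd):
        var = env_sd ** 2 + sens_sd ** 2   # environmental + sensory variance
        return -0.5 * ((x - mu) ** 2 / var + math.log(2 * math.pi * var))
    scores = {c: loglik(x_aud, am, asd, sd_aud) + loglik(x_vis, vm, vsd, sd_vis)
              for c, (am, asd, vm, vsd) in cats.items()}
    z = max(scores.values())               # normalize in log space for stability
    total = sum(math.exp(s - z) for s in scores.values())
    return {c: math.exp(s - z) / total for c, s in scores.items()}

cats = {"/b/": (-1.0, 0.5, -1.0, 0.5), "/d/": (1.0, 0.5, 1.0, 0.5)}
p = category_posterior(x_aud=0.4, x_vis=-0.2, cats=cats, sd_aud=1.5, sd_vis=0.3)
```

On this toy input the reliable visual cue (sensory sd 0.3) outweighs the noisy auditory one (sd 1.5), so the posterior favors /b/ even though the auditory sample lies nearer the /d/ mean; this is the reliability weighting that the study probes on a trial-by-trial basis.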
Khaligh-Razavi, Seyed-Mahdi; Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude
2018-06-07
Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. For this, we applied representational similarity analysis to fMRI and MEG data to yield a view of brain activity with high spatial and temporal resolutions, respectively. Analysis of fMRI data revealed that a distributed and partly overlapping set of cortical regions extending from occipital to ventral and medial temporal cortex represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size more strongly than most other regions. Further analysis of the detailed representational format revealed differences among regions involved in processing animacy. Analysis of MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG-fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis revealed that the neural dynamics of processing animacy and size were distinct from the neural dynamics of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.
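The representational similarity logic behind such MEG-fMRI fusion can be sketched compactly. The data below are invented for illustration; real analyses use condition-by-condition response patterns and often rank correlations, but the 1 − r dissimilarity used here is a common choice.

```python
# Sketch of representational similarity analysis (RSA), the logic behind
# MEG-fMRI fusion: build a representational dissimilarity matrix (RDM)
# per measurement, then correlate the RDMs. All data here are made up.

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def rdm(patterns):
    """Upper triangle of the pairwise (1 - r) dissimilarity matrix."""
    n = len(patterns)
    return [1 - pearson(patterns[i], patterns[j])
            for i in range(n) for j in range(i + 1, n)]

# Toy response patterns for 4 conditions in two modalities
fmri = [[1, 2, 3], [1, 2, 2.5], [5, 1, 0], [5, 0.5, 0]]
meg = [[0.9, 2.1, 3.2], [1.1, 1.9, 2.4], [4.8, 1.2, 0.1], [5.2, 0.4, 0.2]]
fusion = pearson(rdm(fmri), rdm(meg))
# A high RDM correlation means the two modalities carry a similar
# representational geometry for these conditions.
```

Because RDMs abstract away from measurement units, they can be compared directly across modalities (fMRI regions, MEG time points), which is what makes the fusion approach possible.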
Default perception of high-speed motion
Wexler, Mark; Glennerster, Andrew; Cavanagh, Patrick; Ito, Hiroyuki; Seno, Takeharu
2013-01-01
When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This visually striking effect, which we call “high phi,” challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit called dmax. Our experiments with transients, such as texture randomization or contrast reversal, show that the magnitude of the jump depends on spatial frequency and transient duration—but not on the speed of the inducing motion signals—and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond the upper step size limit of dmax, a breakdown of coherent motion perception is expected; however, in the presence of an inducer, observers again perceive coherent displacements at or just above dmax. In summary, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion—as suggested by the minimal-motion principle—observers perceive jumps whose amplitude closely follows their own dmax limits. PMID:23572578
Pastukhov, Alexander
2016-02-01
We investigated the relation between perception and sensory memory of multi-stable structure-from-motion displays. The latter is an implicit visual memory that reflects a recent history of perceptual dominance and influences only the initial perception of multi-stable displays. First, we established the earliest time point when the direction of an illusory rotation can be reversed after the display onset (29-114 ms). Because our display manipulation did not bias perception towards a specific direction of illusory rotation but only signaled the change in motion, this means that the perceptual dominance was established no later than 29-114 ms after the stimulus onset. Second, we used orientation-selectivity of sensory memory to establish which display orientation produced the strongest memory trace and when this orientation was presented during the preceding prime interval (80-140 ms). Surprisingly, both estimates point towards the time interval immediately after the display onset, indicating that both perception and sensory memory form at approximately the same time. This suggests a tighter integration between perception and sensory memory than previously thought, warrants a reconsideration of its role in visual perception, and indicates that sensory memory could be a unique behavioral correlate of the earlier perceptual inference that can be studied post hoc.
Visual Perception of Force: Comment on White (2012)
Hubbard, Timothy L.
2012-01-01
White (2012) proposed that kinematic features in a visual percept are matched to stored representations containing information regarding forces (based on prior haptic experience) and that information in the matched, stored representations regarding forces is then incorporated into visual perception. Although some elements of White's (2012) account…
Interesting case of base of skull mass infiltrating cavernous sinuses.
Singh, Achintya Dinesh; Soneja, Manish; Memon, Saba Samad; Vyas, Surabhi
2016-11-16
A man aged 35 years presented with chronic headache and earache of 1-year duration. He had had progressive vision loss and diplopia for the past 9 months. He also had pain over the face and episodic profuse epistaxis. On examination, perception of light was absent in the right eye and hand movements were detected at 4 m distance in the left eye. Imaging revealed a lobulated mass in the nasopharynx extending into the bilateral cavernous sinuses and sphenoid sinus, with bony erosions. Biopsy of the nasopharyngeal mass revealed pathological features characteristic of IgG4 disease. His serum IgG4 levels and acute inflammatory markers were also elevated. The patient was started on oral corticosteroid therapy. Fever, headache and earache resolved early, and there was gradual improvement in the vision of the left eye. After 6 months, visual acuity in the left eye was 6/9, but right eye visual acuity showed no change. Follow-up imaging revealed a significant reduction in the size of the mass. 2016 BMJ Publishing Group Ltd.
Zenner, André; Krüger, Antonio
2017-04-01
We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.
Influence of Visual Prism Adaptation on Auditory Space Representation.
Pochopien, Klaudia; Fahle, Manfred
2017-01-01
Prisms shifting the visual input sideways produce a mismatch between the visual and felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.
Advances in color science: from retina to behavior
Chatterjee, Soumya; Field, Greg D.; Horwitz, Gregory D.; Johnson, Elizabeth N.; Koida, Kowa; Mancuso, Katherine
2010-01-01
Color has become a premier model system for understanding how information is processed by neural circuits, and for investigating the relationships among genes, neural circuits and perception. Both the physical stimulus for color and the perceptual output experienced as color are quite well characterized, but the neural mechanisms that underlie the transformation from stimulus to perception are incompletely understood. The past several years have seen important scientific and technical advances that are changing our understanding of these mechanisms. Here, and in the accompanying minisymposium, we review the latest findings and hypotheses regarding color computations in the retina, primary visual cortex and higher-order visual areas, focusing on non-human primates, a model of human color vision. PMID:21068298
Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.
Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin
2017-01-01
Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.
Short-Term Memory for Figure-Ground Organization in the Visual Cortex
O’Herron, Philip; von der Heydt, Rüdiger
2009-01-01
Whether the visual system uses a buffer to store image information and the duration of that storage have been debated intensely in recent psychophysical studies. The long phases of stable perception of reversible figures suggest a memory that persists for seconds. But persistence of similar duration has not been found in signals of the visual cortex. Here we show that figure-ground signals in the visual cortex can persist for a second or more after the removal of the figure-ground cues. When new figure-ground information is presented, the signals adjust rapidly, but when a figure display is changed to an ambiguous edge display, the signals decay slowly – a behavior that is characteristic of memory devices. Figure-ground signals represent the layout of objects in a scene, and we propose that a short-term memory for object layout is important in providing continuity of perception in the rapid stream of images flooding our eyes. PMID:19285475
Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.
Berger, Christopher C; Ehrsson, H Henrik
2018-04-01
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect-a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli-is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
Hagan, Cindy C; Woods, Will; Johnson, Sam; Calder, Andrew J; Green, Gary G R; Young, Andrew W
2009-11-24
An influential neural model of face perception suggests that the posterior superior temporal sulcus (STS) is sensitive to those aspects of faces that produce transient visual changes, including facial expression. Other researchers note that recognition of expression involves multiple sensory modalities and suggest that the STS also may respond to crossmodal facial signals that change transiently. Indeed, many studies of audiovisual (AV) speech perception show STS involvement in AV speech integration. Here we examine whether these findings extend to AV emotion. We used magnetoencephalography to measure the neural responses of participants as they viewed and heard emotionally congruent fear and minimally congruent neutral face and voice stimuli. We demonstrate significant supra-additive responses (i.e., where AV > [unimodal auditory + unimodal visual]) in the posterior STS within the first 250 ms for emotionally congruent AV stimuli. These findings show a role for the STS in processing crossmodal emotive signals.
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues: two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and the location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning three different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event-related potential (ERP), which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that the different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces, controlled for low-level image characteristics, that is applicable to a range of research questions in social perception. PMID:25028164
NASA Astrophysics Data System (ADS)
Hansen, Christian; Schlichting, Stefan; Zidowitz, Stephan; Köhn, Alexander; Hindennach, Milo; Kleemann, Markus; Peitgen, Heinz-Otto
2008-03-01
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are not visible in preoperative data and their existence may require changes to the resection strategy. We propose a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated ultrasound system. A fast communication protocol enables our application to exchange crucial data with this navigation system during an intervention. A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within a complex 3D planning model including vascular systems, tumors, and organ surfaces. When the ultrasound plane is located inside the liver, occlusion of the ultrasound plane by the planning model is an inevitable problem for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while perceiving context-relevant planning information. To improve orientation ability and distance perception, we include additional depth cues by applying new illustrative visualization algorithms. Preliminary evaluations confirm that in case of intraoperatively detected tumors a risk analysis adaptation is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion problems.
Making memories: the development of long-term visual knowledge in children with visual agnosia.
Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo
2013-01-01
There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.
Analyzing the Reading Skills and Visual Perception Levels of First Grade Students
Çayir, Aybala
2017-01-01
The purpose of this study was to analyze primary school first grade students' reading levels and correlate them with their visual perception skills. For this purpose, students' reading speed, reading comprehension and reading errors were determined using The Informal Reading Inventory. Students' visual perception levels were also analyzed using…
Assadi, Amir H.
2001-11-01
Perceptual geometry is an emerging field of interdisciplinary research whose objectives focus on study of geometry from the perspective of visual perception, and in turn, apply such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in perception of form and representation of space through synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form and space are among fundamental problems in vision science. In recent cognitive and computational models of human perception, natural scenes are used systematically as preferred visual stimuli. Among key problems in perception of form and space, we have examined perception of geometry of natural surfaces and curves, e.g. as in the observer's environment. Besides a systematic mathematical foundation for a remarkably general framework, the advantages of the Gestalt theory of natural surfaces include a concrete computational approach to simulate or recreate images whose geometric invariants and quantities might be perceived and estimated by an observer. The latter is at the very foundation of understanding the nature of perception of space and form, and the (computer graphics) problem of rendering scenes to visually invoke virtual presence.
Simulating the role of visual selective attention during the development of perceptual completion
Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.
2014-01-01
We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728
Xu, Songhua; Tourassi, Georgia
2012-01-01
The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of the visual similarity on example cases. The purpose of our study is twofold: (i) discern better the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-value ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
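The agreement analysis above rests on the kappa statistic. A minimal sketch of Cohen's chance-corrected agreement between two raters, assuming each rater's triplet choices are coded as integer labels; the toy ratings below are invented for illustration, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement under independence, from marginal label frequencies
    expected = sum(freq_a[l] * freq_b[l] for l in set(freq_a) | set(freq_b)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical choices of the "most similar" mass in eight triplets (labels 0/1/2)
a = [0, 1, 2, 0, 1, 1, 2, 0]
b = [0, 1, 1, 0, 2, 1, 2, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.429
```

Values near 0 indicate chance-level agreement, consistent with the "poor to fair" pairwise agreement the study reports.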
The perception of naturalness correlates with low-level visual features of environmental scenes.
Berman, Marc G; Hout, Michael C; Kardan, Omid; Hunter, MaryCarol R; Yourganov, Grigori; Henderson, John M; Hanayik, Taylor; Karimi, Hossein; Jonides, John
2014-01-01
Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
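The study above predicts a natural/built label from four low-level image features. A minimal nearest-centroid sketch in that spirit; the feature vectors (contrast-edge density, straight-line density, mean saturation, hue diversity) are synthetic values for illustration, and the study itself trained a different machine-learning algorithm:

```python
import math

# Synthetic feature vectors: (edge density, straight-line density,
# mean saturation, hue diversity) -- illustrative values only.
natural = [(0.8, 0.1, 0.6, 0.7), (0.7, 0.2, 0.5, 0.8)]
built   = [(0.3, 0.9, 0.3, 0.2), (0.4, 0.8, 0.2, 0.3)]

def centroid(rows):
    """Per-dimension mean of a list of equal-length feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(x, c_nat, c_blt):
    """Label a scene by its nearer class centroid in feature space."""
    return "natural" if math.dist(x, c_nat) < math.dist(x, c_blt) else "built"

c_nat, c_blt = centroid(natural), centroid(built)
print(classify((0.75, 0.15, 0.55, 0.75), c_nat, c_blt))  # → natural
```

A scene with dense contrast edges, few straight lines, and high saturation/hue diversity lands on the "natural" side, matching the direction of the features the study identified.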
The Silhouette Zoetrope: A New Blend of Motion, Mirroring, Depth, and Size Illusions
Veras, Christine; Pham, Quang-Cuong
2017-01-01
Here, we report a novel combination of visual illusions in one stimulus device, a contemporary innovation of the traditional zoetrope, called Silhouette Zoetrope. In this new device, an animation of moving silhouettes is created by sequential cutouts placed outside a rotating empty cylinder, with slits illuminating the cutouts successively from the back. This “inside-out” zoetrope produces the following visual effects: the resulting animated figures are perceived (a) as horizontally flipped, (b) as located inside the cylinder, and (c) as being of a different size than the actual cutout object. Here, we explore the unique combination of illusions in this new device. We demonstrate how the geometry of the device leads to a retinal image consistent with a mirrored and distorted image and binocular disparities consistent with the perception of an object inside the cylinder. PMID:28473908
Lower pitch is larger, yet falling pitches shrink.
Eitan, Zohar; Schupak, Asi; Gotler, Alex; Marks, Lawrence E
2014-01-01
Experiments using diverse paradigms, including speeded discrimination, indicate that pitch and visually-perceived size interact perceptually, and that higher pitch is congruent with smaller size. While nearly all of these studies used static stimuli, here we examine the interaction of dynamic pitch and dynamic size, using Garner's speeded discrimination paradigm. Experiment 1 examined the interaction of continuous rise/fall in pitch and increase/decrease in object size. Experiment 2 examined the interaction of static pitch and size (steady high/low pitches and large/small visual objects), using an identical procedure. Results indicate that static and dynamic auditory and visual stimuli interact in opposite ways. While for static stimuli (Experiment 2), higher pitch is congruent with smaller size (as suggested by earlier work), for dynamic stimuli (Experiment 1), ascending pitch is congruent with growing size, and descending pitch with shrinking size. In addition, while static stimuli (Experiment 2) exhibit both congruence and Garner effects, dynamic stimuli (Experiment 1) present congruence effects without Garner interference, a pattern that is not consistent with prevalent interpretations of Garner's paradigm. Our interpretation of these results focuses on effects of within-trial changes on processing in dynamic tasks and on the association of changes in apparent size with implied changes in distance. Results suggest that static and dynamic stimuli can differ substantially in their cross-modal mappings, and may rely on different processing mechanisms.
Keefe, Bruce D; Wincenciak, Joanna; Jellema, Tjeerd; Ward, James W; Barraclough, Nick E
2016-07-01
When observing another individual's actions, we can both recognize their actions and infer their beliefs concerning the physical and social environment. The extent to which visual adaptation influences action recognition and conceptually later stages of processing involved in deriving the belief state of the actor remains unknown. To explore this we used virtual reality (life-size photorealistic actors presented in stereoscopic three dimensions) to see how visual adaptation influences the perception of individuals in naturally unfolding social scenes at increasingly higher levels of action understanding. We presented scenes in which one actor picked up boxes (of varying number and weight), after which a second actor picked up a single box. Adaptation to the first actor's behavior systematically changed perception of the second actor. Aftereffects increased with the duration of the first actor's behavior, declined exponentially over time, and were independent of view direction. Inferences about the second actor's expectation of box weight were also distorted by adaptation to the first actor. Distortions in action recognition and actor expectations did not, however, extend across different actions, indicating that adaptation is not acting at an action-independent abstract level but rather at an action-dependent level. We conclude that although adaptation influences more complex inferences about belief states of individuals, this is likely to be a result of adaptation at an earlier action recognition stage rather than adaptation operating at a higher, more abstract level in mentalizing or simulation systems.
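The aftereffect dynamics reported above (growth with adaptation duration, exponential decline over time) can be sketched with a toy model. The saturating build-up term, the function name, and the parameter values are assumptions for illustration, not fitted values from the study:

```python
import math

def aftereffect(adapt_duration, test_delay, gain=1.0, tau=4.0):
    """Toy aftereffect magnitude: saturating build-up with adaptation
    duration, exponential decay with delay. gain/tau are illustrative."""
    build_up = 1.0 - math.exp(-adapt_duration / tau)   # grows, then saturates
    decay = math.exp(-test_delay / tau)                # exponential decline
    return gain * build_up * decay

print(round(aftereffect(8.0, 0.0), 3))  # → 0.865
```

Longer adaptation yields a larger aftereffect, and a longer delay before test shrinks it, which is the qualitative pattern the abstract describes.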
Ground-plane influences on size estimation in early visual processing.
Champion, Rebecca A; Warren, Paul A
2010-07-21
Ground-planes have an important influence on the perception of 3D space (Gibson, 1950) and it has been shown that the assumption that a ground-plane is present in the scene plays a role in the perception of object distance (Bruno & Cutting, 1988). Here, we investigate whether this influence is exerted at an early stage of processing, to affect the rapid estimation of 3D size. Participants performed a visual search task in which they searched for a target object that was larger or smaller than distracter objects. Objects were presented against a background that contained either a frontoparallel or slanted 3D surface, defined by texture gradient cues. We measured the effect on search performance of target location within the scene (near vs. far) and how this was influenced by scene orientation (which, e.g., might be consistent with a ground or ceiling plane, etc.). In addition, we investigated how scene orientation interacted with texture gradient information (indicating surface slant), to determine how these separate cues to scene layout were combined. We found that the difference in target detection performance between targets at the front and rear of the simulated scene was maximal when the scene was consistent with a ground-plane - consistent with the use of an elevation cue to object distance. In addition, we found a significant increase in the size of this effect when texture gradient information (indicating surface slant) was present, but no interaction between texture gradient and scene orientation information. We conclude that scene orientation plays an important role in the estimation of 3D size at an early stage of processing, and suggest that elevation information is linearly combined with texture gradient information for the rapid estimation of 3D size.
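The linear combination of elevation and texture-gradient information suggested in the conclusion can be written as a weighted average, as in classic cue-integration models. A minimal sketch, assuming fixed weights that sum to 1; the example size estimates and weights are hypothetical:

```python
def combine_cues(estimates, weights):
    """Linear (weighted-average) cue combination; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * e for e, w in zip(estimates, weights))

# Hypothetical size estimates (cm) from the elevation cue and the
# texture-gradient cue, with an assumed 60/40 weighting
print(round(combine_cues([10.0, 12.0], [0.6, 0.4]), 1))  # → 10.8
```

In full cue-integration models the weights are typically set by each cue's reliability (inverse variance) rather than fixed constants.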
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
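The "self-motion stimulus at which subjects were equally likely to report motion in either direction" is a point of subjective equality (PSE), and a visual stimulus shifts it. A minimal sketch estimating the PSE by linear interpolation of the 50% crossing; the response proportions below are hypothetical, and real analyses typically fit a cumulative Gaussian instead:

```python
def pse(stim_levels, p_rightward):
    """Point of subjective equality: stimulus level at the 50% crossing,
    found by linear interpolation between bracketing levels."""
    points = list(zip(stim_levels, p_rightward))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("no 50% crossing in the data")

# Hypothetical proportions of 'rightward' reports vs. inertial velocity (cm/s)
levels = [-2.0, -1.0, 0.0, 1.0, 2.0]
baseline = [0.05, 0.20, 0.50, 0.80, 0.95]
with_visual = [0.02, 0.10, 0.30, 0.60, 0.90]  # psychometric curve shifted
print(pse(levels, baseline))  # → 0.0
print(round(pse(levels, with_visual), 2))
```

The difference between the two PSEs (here about 0.67 cm/s) quantifies the shift in self-motion perception induced by the visual stimulus.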
Whitwell, Robert L; Goodale, Melvyn A; Merritt, Kate E; Enns, James T
2018-01-01
The two visual systems hypothesis proposes that human vision is supported by an occipito-temporal network for the conscious visual perception of the world and a fronto-parietal network for visually-guided, object-directed actions. Two specific claims about the fronto-parietal network's role in sensorimotor control have generated much data and controversy: (1) the network relies primarily on the absolute metrics of target objects, which it rapidly transforms into effector-specific frames of reference to guide the fingers, hands, and limbs, and (2) the network is largely unaffected by scene-based information extracted by the occipito-temporal network for those same targets. These two claims lead to the counter-intuitive prediction that in-flight anticipatory configuration of the fingers during object-directed grasping will resist the influence of pictorial illusions. The research confirming this prediction has been criticized for confounding the difference between grasping and explicit estimates of object size with differences in attention, sensory feedback, obstacle avoidance, metric sensitivity, and priming. Here, we address and eliminate each of these confounds. We asked participants to reach out and pick up 3D target bars resting on a picture of the Sander Parallelogram illusion and to make explicit estimates of the length of those bars. Participants performed their grasps without visual feedback, and were permitted to grasp the targets after making their size-estimates to afford them an opportunity to reduce illusory error with haptic feedback. The results show unequivocally that the effect of the illusion is stronger on perceptual judgments than on grasping. Our findings from the normally-sighted population provide strong support for the proposal that human vision is comprised of functionally and anatomically dissociable systems.
Horn, R R; Williams, A M; Scott, M A; Hodges, N J
2005-07-01
The authors examined the observational learning of 24 participants whom they constrained to use the model by removing intrinsic visual knowledge of results (KR). Matched participants assigned to video (VID), point-light (PL), and no-model (CON) groups performed a soccer-chipping task in which vision was occluded at ball contact. Pre- and posttests were interspersed with alternating periods of demonstration and acquisition. The authors assessed delayed retention 2-3 days later. In support of the visual perception perspective, the participants who observed the models showed immediate and enduring changes to more closely imitate the model's relative motion. While observing the demonstration, the PL group participants were more selective in their visual search than were the VID group participants but did not perform more accurately or learn more.
Audibility and visual biasing in speech perception
Clement, Bart Richard
Although speech perception has been considered a predominantly auditory phenomenon, large benefits from vision in degraded acoustic conditions suggest integration of audition and vision. More direct evidence of this comes from studies of audiovisual disparity that demonstrate vision can bias and even dominate perception (McGurk & MacDonald, 1976). It has been observed that hearing-impaired listeners demonstrate more visual biasing than normally hearing listeners (Walden et al., 1990). It is argued here that stimulus audibility must be equated across groups before true differences can be established. In the present investigation, effects of visual biasing on perception were examined as audibility was degraded for 12 young normally hearing listeners. Biasing was determined by quantifying the degree to which listener identification functions for a single synthetic auditory /ba-da-ga/ continuum changed across two conditions: (1) an auditory-only listening condition; and (2) an auditory-visual condition in which every item of the continuum was synchronized with visual articulations of the consonant-vowel (CV) tokens /ba/ and /ga/, as spoken by each of two talkers. Audibility was altered by presenting the conditions in quiet and in noise at each of three signal-to-noise (S/N) ratios. For the visual-/ba/ context, large effects of audibility were found. As audibility decreased, visual biasing increased. A large talker effect also was found, with one talker eliciting more biasing than the other. An independent lipreading measure demonstrated that this talker was more visually intelligible than the other. For the visual-/ga/ context, audibility and talker effects were less robust, possibly obscured by strong listener effects, which were characterized by marked differences in perceptual processing patterns among participants.
Some demonstrated substantial biasing whereas others demonstrated little, indicating a strong reliance on audition even in severely degraded acoustic conditions. Listener effects were not correlated with lipreading performance. The large effect of audibility suggests that conclusions regarding an increased reliance on vision among hearing-impaired listeners were premature, and that accurate comparisons can be made only after equating audibility. Further, if after such control individual hearing-impaired listeners demonstrate the processing differences that were demonstrated in the present investigation, then these findings have the potential to impact aural rehabilitation strategies.
Scanpath-based analysis of objects conspicuity in context of human vision physiology.
Augustyniak, Piotr
2007-01-01
This paper discusses principal aspects of object conspicuity investigated with an eye tracker and interpreted against the background of human vision physiology. Proper management of object conspicuity is fundamental to several leading-edge applications in the information society, such as advertising, web design, man-machine interfacing, and ergonomics. Although some common rules of human perception have been applied in art for centuries, interest in the human perception process is motivated today by the need to capture and hold the recipient's attention by putting selected messages in front of the others. Our research uses a visual-task methodology and series of progressively modified natural images. The modified details were characterized by their size, color, and position, while the scanpath-derived gaze points confirmed whether or not each detail was perceived. The statistical analysis yielded the probability of detail perception and its correlations with these attributes. This probability conforms to knowledge of retinal anatomy and perceptual physiology, even though we used only noninvasive methods.
Visual perception of fatigued lifting actions.
Fischer, Steven L; Albert, Wayne J; McGarry, Tim
2012-12-01
Fatigue-related changes in lifting kinematics may expose workers to undue injury risks. Early detection of accumulating fatigue offers the prospect of intervention strategies to mitigate such fatigue-related risks. In a first step towards this objective, this study investigated whether fatigue detection was accessible to visual perception and, if so, what was the key visual information required for successful fatigue discrimination. Eighteen participants were tasked with identifying fatigued lifts when viewing 24 trials presented using both video and point-light representations. Each trial comprised a pair of lifting actions containing a fresh and a fatigued lift from the same individual presented in counter-balanced sequence. Confidence intervals demonstrated that the frequency of correct responses for both sexes exceeded chance expectations (50%) for both video (68%±12%) and point-light representations (67%±10%), demonstrating that fatigued lifting kinematics are open to visual perception. There were no significant differences between sexes or viewing condition, the latter result indicating kinematic dynamics as providing sufficient information for successful fatigue discrimination. Moreover, results from single viewer investigation reported fatigue detection (75%) from point-light information describing only the kinematics of the box lifted. These preliminary findings may have important workplace applications if fatigue discrimination rates can be improved upon through future research.
Stevenson, Ryan A; Toulmin, Jennifer K; Youm, Ariana; Besney, Richard M A; Schulz, Samantha E; Barense, Morgan D; Ferber, Susanne
2017-10-30
Recent empirical evidence suggests that autistic individuals perceive the world differently than their typically-developed peers. One theoretical account, the predictive coding hypothesis, posits that autistic individuals show a decreased reliance on previous perceptual experiences, which may relate to autism symptomatology. We tested this through a well-characterized, audiovisual statistical-learning paradigm in which typically-developed participants were first adapted to consistent temporal relationships between audiovisual stimulus pairs (audio-leading, synchronous, visual-leading) and then performed a simultaneity judgement task with audiovisual stimulus pairs varying in temporal offset from auditory-leading to visual-leading. Following exposure to the visual-leading adaptation phase, participants' perception of synchrony was biased towards visual-leading presentations, reflecting the statistical regularities of their previously experienced environment. Importantly, the strength of adaptation was significantly related to the level of autistic traits that the participant exhibited, measured by the Autism Quotient (AQ). This was specific to the Attention to Detail subscale of the AQ that assesses the perceptual propensity to focus on fine-grain aspects of sensory input at the expense of more integrative perceptions. More severe Attention to Detail was related to weaker adaptation. These results support the predictive coding framework, and suggest that changes in sensory perception commonly reported in autism may contribute to autistic symptomatology.
Garment sizes in perception of body size.
Fan, Jintu; Newton, Edward; Lau, Lilian; Liu, Fu
2003-06-01
This paper reports an experimental investigation of the effect of garment size on perceived body size. The perceived body sizes of three Chinese men (thin, medium, and obese build) wearing different sizes of white T-shirts were assessed using Thompson and Gray's (1995) nine-figural scale, graded from 1 (thinnest) to 9 (obese), and a newly proposed comparative method. Within the limits of commercially available T-shirt sizes, thin and medium-build persons were perceived as bigger when wearing larger T-shirts. For an obese person, however, wearing a large T-shirt tends to make him look thinner. The study also showed that the newly proposed comparative method is more reliable for comparing body-size perception, although it does not measure the magnitude of the change in body-size grade. The figural scale and the comparative method can therefore be complementary.
The reliability and clinical correlates of figure-ground perception in schizophrenia.
Malaspina, Dolores; Simon, Naomi; Goetz, Raymond R; Corcoran, Cheryl; Coleman, Eliza; Printz, David; Mujica-Parodi, Lilianne; Wolitzky, Rachel
2004-01-01
Schizophrenia subjects are impaired in a number of visual attention paradigms. However, their performance on tests of figure-ground visual perception (FGP), which requires subjects to visually discriminate figures embedded in a rival background, is relatively unstudied. We examined FGP in 63 schizophrenia patients and 27 control subjects and found that the patients performed the FGP test reliably and had significantly lower FGP scores than the control subjects. Figure-ground visual perception was significantly correlated with other neuropsychological test scores and was inversely related to negative symptoms. It was unrelated to antipsychotic medication treatment. Figure-ground visual perception depends on "top down" processing of visual stimuli, and thus this data suggests that dysfunction in the higher-level pathways that modulate visual perceptual processes may also be related to a core defect in schizophrenia.
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory
2016-02-01
Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
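The masking-based classification analysis can be sketched as a reverse-correlation computation: average the random transparency masks separately for /apa/ and non-/apa/ trials, and the frames where visibility differs between the two are the perceptually relevant ones. A minimal sketch follows; the trial count, response probabilities, and the "critical frame" are hypothetical illustration values, not numbers from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_frames = 2000, 30
critical = 12  # hypothetical frame carrying the decisive visual cue

# Random transparency masks: 1 = visual speech visible in that frame
masks = rng.integers(0, 2, size=(n_trials, n_frames))

# Simulated observer: seeing the critical frame biases the percept away
# from /apa/ (i.e., toward the fused McGurk percept /ata/)
p_apa = np.where(masks[:, critical] == 1, 0.05, 0.35)
resp_apa = rng.random(n_trials) < p_apa

# Classification image: frames whose visibility differs between /apa/
# and non-/apa/ trials are the perceptually relevant ones
ci = masks[resp_apa].mean(axis=0) - masks[~resp_apa].mean(axis=0)
relevant_frame = int(np.argmin(ci))  # most negative = most diagnostic
```

With enough trials the classification image recovers the diagnostic frame; the real analysis produced a full spatiotemporal map rather than a per-frame vector.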
Alekseichuk, Ivan; Diers, Kersten; Paulus, Walter; Antal, Andrea
2016-10-15
The aim of this study was to investigate if the blood oxygenation level-dependent (BOLD) changes in the visual cortex can be used as biomarkers reflecting the online and offline effects of transcranial electrical stimulation (tES). Anodal transcranial direct current stimulation (tDCS) and 10Hz transcranial alternating current stimulation (tACS) were applied for 10min duration over the occipital cortex of healthy adults during the presentation of different visual stimuli, using a crossover, double-blinded design. Control experiments were also performed, in which sham stimulation as well as another electrode montage were used. Anodal tDCS over the visual cortex induced a small but significant further increase in BOLD response evoked by a visual stimulus; however, no aftereffect was observed. Ten hertz of tACS did not result in an online effect, but in a widespread offline BOLD decrease over the occipital, temporal, and frontal areas. These findings demonstrate that tES during visual perception affects the neuronal metabolism, which can be detected with functional magnetic resonance imaging (fMRI). Copyright © 2016 Elsevier Inc. All rights reserved.
Effects of configural processing on the perceptual spatial resolution for face features.
Namdar, Gal; Avidan, Galia; Ganel, Tzvi
2015-11-01
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright than for inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) for subtle positional changes between specific features in upright and inverted faces. The results revealed a robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together these findings suggest that face orientation modulates fundamental psychophysical abilities, including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
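The JND logic in such a task can be sketched as a simulated change-detection experiment: more internal noise (as presumably induced by inversion) shifts the psychometric function rightward and raises the 75%-detection offset. The offsets, noise levels, and detection rule below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical positional offsets between face features (arbitrary units)
offsets = np.linspace(0.5, 8.0, 12)
n_rep = 2000
sigma_upright, sigma_inverted = 2.0, 3.5  # assumed internal noise levels

def detect_rate(sigma):
    """Simulate a change-detection task: an offset is reported when the
    noisy internal estimate of the positional change exceeds a criterion."""
    noise = rng.normal(0.0, sigma, size=(n_rep, offsets.size))
    return ((offsets + noise) > sigma).mean(axis=0)

def jnd(rates, threshold=0.75):
    """JND: smallest offset detected on 75% of trials (linear interpolation)."""
    return float(np.interp(threshold, rates, offsets))

jnd_upright = jnd(detect_rate(sigma_upright))
jnd_inverted = jnd(detect_rate(sigma_inverted))
# Inversion (modeled as extra internal noise) raises the JND,
# i.e., coarser spatial resolution for configural changes
```

Real psychophysics would fit a cumulative Gaussian rather than interpolate, but the comparison of thresholds across conditions is the same.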
Psycho-physiological effects of visual artifacts by stereoscopic display systems
NASA Astrophysics Data System (ADS)
Kim, Sanghyun; Yoshitake, Junki; Morikawa, Hiroyuki; Kawai, Takashi; Yamada, Osamu; Iguchi, Akihiko
2011-03-01
The methods available for delivering stereoscopic (3D) display using glasses can be classified as time-multiplexing and spatial-multiplexing. With both methods, intrinsic visual artifacts result from the generation of the 3D image pair on a flat panel display device. In the case of the time-multiplexing method, an observer perceives three artifacts: flicker, the Mach-Dvorak effect, and a phantom array. Each appears under particular viewing conditions: flicker under any condition, the Mach-Dvorak effect during smooth-pursuit eye movements (SPM), and a phantom array during saccadic eye movements (saccades). With spatial-multiplexing, the artifacts are temporal parallax (due to the interlaced video signal), binocular rivalry, and reduced spatial resolution. These artifacts are considered among the major impediments to the safety and comfort of 3D display users. In this study, the implications of the artifacts for safety and comfort are evaluated by examining the psychological changes they cause, through subjective symptoms of fatigue and the depth sensation. Physiological changes are also measured as objective responses, based on analysis of heart and brain activation evoked by the visual artifacts. Further, to understand the characteristics of each artifact and their combined effects, four experimental conditions are developed and tested. The results show that perception of artifacts differs according to the visual environment and the display method. Furthermore, visual fatigue and the depth sensation are influenced by the individual characteristics of each artifact. Similarly, heart rate variability and regional cerebral oxygenation changed with the perception of artifacts across conditions.
Perceived change in orientation from optic flow in the central visual field
NASA Technical Reports Server (NTRS)
Dyre, Brian P.; Andersen, George J.
1988-01-01
The effects of internal depth within a simulation display on perceived changes in orientation have been studied. Subjects monocularly viewed displays simulating observer motion within a volume of randomly positioned points through a window which limited the field of view to 15 deg. Changes in perceived spatial orientation were measured by changes in posture. The extent of internal depth within the display, the presence or absence of visual information specifying change in orientation, and the frequency of motion supplied by the display were examined. It was found that increased sway occurred at frequencies equal to or below 0.375 Hz when motion at these frequencies was displayed. The extent of internal depth had no effect on the perception of changing orientation.
Zold, Camila L.
2015-01-01
The primary visual cortex (V1) is widely regarded as faithfully conveying the physical properties of visual stimuli. Thus, experience-induced changes in V1 are often interpreted as improving visual perception (i.e., perceptual learning). Here we describe how, with experience, cue-evoked oscillations emerge in V1 to convey expected reward time as well as to relate experienced reward rate. We show, in chronic multisite local field potential recordings from rat V1, that repeated presentation of visual cues induces the emergence of visually evoked oscillatory activity. Early in training, the visually evoked oscillations relate to the physical parameters of the stimuli. However, with training, the oscillations evolve to relate the time in which those stimuli foretell expected reward. Moreover, the oscillation prevalence reflects the reward rate recently experienced by the animal. Thus, training induces experience-dependent changes in V1 activity that relate to what those stimuli have come to signify behaviorally: when to expect future reward and at what rate. PMID:26134643
Motion perception tasks as potential correlates to driving difficulty in the elderly
NASA Astrophysics Data System (ADS)
Raghuram, A.; Lakshminarayanan, V.
2006-09-01
Demographic changes indicate that the population older than 65 is growing because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test called the useful field of view (UFOV) was a stronger predictor of accident rates than any other visual function test. The present study evaluates a qualitative trend in using motion perception tasks as potential visual-perceptual correlates for screening elderly drivers who might have difficulty driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimating direction of heading. A motion index score was calculated to summarize performance on all of the above motion tasks. Visual attention was assessed using the UFOV, and a driving habits questionnaire was administered for self-reports of driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks successfully identified subjects who reported difficulty in certain aspects of driving and had had accidents. The correlation between UFOV and motion index scores was not significant, indicating that the two paradigms probably tap different aspects of visual information processing that are crucial to driving behaviour. Together, UFOV and motion perception tasks may therefore predict at-risk or safe drivers better than either measure alone.
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the force of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used because it is well suited to steganographic applications: any change in CIELAB has a corresponding effect on the human visual system (HVS), and it is essential for a steganographic scheme to remain undetectable by the HVS. The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded to the human perception of different frequencies in an image. The imperceptibility of the steganographic scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity index (SSIM) and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
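The core idea of letting a contrast sensitivity function control insertion strength can be sketched in the frequency domain. This simplified illustration uses a Mannos–Sakrison-style CSF and a plain FFT instead of the authors' contourlet decomposition, and the image, payload, and scaling constants are all hypothetical:

```python
import numpy as np

def csf(f):
    """Mannos–Sakrison-style contrast sensitivity: peaks at mid spatial
    frequencies and falls off where the eye is less sensitive."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

rng = np.random.default_rng(3)
img = rng.random((64, 64))                           # stand-in for one channel
payload = rng.integers(0, 2, size=(64, 64)) * 2 - 1  # hidden bits as +/-1

# Radial spatial frequency of each FFT coefficient (cycles per image)
fy = np.fft.fftfreq(64)[:, None] * 64
fx = np.fft.fftfreq(64)[None, :] * 64
f = np.hypot(fx, fy)

# Force of insertion inversely tied to sensitivity: embed strongly
# where the HVS is least likely to notice the change
alpha = 0.01 / (csf(f) + 0.05)

# Add the weighted payload in the spectrum; taking the real part after
# the inverse FFT discards the small asymmetric imaginary residue
stego = np.real(np.fft.ifft2(np.fft.fft2(img) + alpha * payload))

distortion = float(np.abs(stego - img).max())  # perceptually small
```

A production scheme would additionally work in CIELAB and use a multiscale directional transform, but the CSF-weighted gain is the same mechanism.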
Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
2016-01-01
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
Reading Disability and Visual Perception in Families: New Findings.
ERIC Educational Resources Information Center
Oxford, Rebecca L.
Frequently a variety of visual perception difficulties correlate with reading disabilities. A study was made to investigate the relationship between visual perception and reading disability in families, and to explore the genetic aspects of the relationship. One-hundred twenty-five reading-disabled students, ages 7.5 to 12 years, were matched with…
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J
2013-06-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
Ganz, Aura; Schafer, James; Gandhi, Siddhesh; Puleo, Elaine; Wilson, Carole; Robertson, Meg
2012-01-01
We introduce the PERCEPT system, an indoor navigation system for the blind and visually impaired. PERCEPT will improve the quality of life and health of the visually impaired community by enabling independent living. Using PERCEPT, blind users will have independent access to public health facilities such as clinics, hospitals, and wellness centers. Access to healthcare facilities is crucial for this population because of the multiple health conditions they face, such as diabetes and its complications. Trials with 24 blind and visually impaired users in a multistory building show the PERCEPT system's effectiveness in providing appropriate navigation instructions to these users. The uniqueness of our system is that it is affordable and that its design follows orientation and mobility principles. We hope that PERCEPT will become a standard deployed in all indoor public spaces, especially in healthcare and wellness facilities. PMID:23316225
Visual Imagery without Visual Perception?
ERIC Educational Resources Information Center
Bertolo, Helder
2005-01-01
The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to determine whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…
Perception and Attention for Visualization
ERIC Educational Resources Information Center
Haroz, Steve
2013-01-01
This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…
Barton, Brian; Brewer, Alyssa A.
2017-01-01
The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs) that each follows the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm2 with relatively large pRF sizes of ∼2–8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm2 in surface area, with pRF sizes here similarly ∼1–8° of visual angle across the same region. 
In addition, cortical magnification measurements show that a larger extent of the pSTS VFM surface areas are devoted to the peripheral visual field than those in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing. PMID:28293182
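The pRF modeling used to characterize these maps predicts a voxel's response as the overlap between a 2D Gaussian receptive field and the stimulus aperture. A minimal sketch of that prediction step follows; the grid, pRF centers, and sizes are illustrative stand-ins (a small V3A/B-like pRF versus a large pSTS-like one), not fitted values from the study:

```python
import numpy as np

# Visual field grid (degrees of visual angle)
deg = np.linspace(-10, 10, 101)
X, Y = np.meshgrid(deg, deg)

def prf(x0, y0, sigma):
    """2D Gaussian population receptive field, normalized over the grid."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def predicted_response(prf_map, stim):
    """pRF model prediction: overlap of the pRF with a binary stimulus."""
    return float((prf_map * stim).sum())

# Hypothetical bar stimulus covering the left hemifield
bar = (X < 0).astype(float)

small = prf(-4.0, 0.0, 1.0)  # V3A/B-like: small pRF
large = prf(-4.0, 0.0, 6.0)  # pSTS-like: large pRF

# A small pRF inside the stimulated hemifield responds near-maximally;
# a large pRF pools the unstimulated field too, diluting the response
r_small = predicted_response(small, bar)
r_large = predicted_response(large, bar)
```

Fitting a pRF amounts to searching over (x0, y0, sigma) so that such predictions, convolved with a hemodynamic response, best match the measured fMRI time series.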
Spatial Alignment and Response Hand in Geometric and Motion Illusions
Scocchia, Lisa; Paroli, Michela; Stucchi, Natale A.; Sedda, Anna
2017-01-01
Perception of visual illusions is susceptible to manipulation of their spatial properties. Further, illusions can sometimes affect visually guided actions, especially during the movement planning phase. Remarkably, visual properties of objects related to actions, such as affordances, can prime more accurate perceptual judgements. In spite of the amount of knowledge available on affordances and on the influence of illusions on actions (or lack thereof), virtually nothing is known about the reverse: the influence of action-related parameters on the perception of visual illusions. Here, we tested the hypothesis that the response mode (which can be linked to action-relevant features) can affect perception of the Poggendorff (geometric) and of the Vanishing Point (motion) illusion. We explored the role of hand dominance (right dominant versus left non-dominant hand) and its interaction with stimulus spatial alignment (i.e., congruency between the visual stimulus and the hand used for responses). Seventeen right-handed participants performed our tasks with their right and left hands, and the stimuli were presented in regular and mirror-reversed views. It turned out that the regular version of the Poggendorff display generates a stronger illusion than the mirror version, and that participants are less accurate and more variable when they use their left hand in responding to the Vanishing Point. In summary, our results show that there is a marginal effect of hand precision in motion-related illusions, which is absent for geometric illusions. In the latter, attentional anisometry seems to play a greater role in generating the illusory effect. Taken together, our findings suggest that changes in the response mode (here: manual action-related parameters) do not necessarily affect illusion perception.
Therefore, although intuitively speaking there should be at least unidirectional effects of perception on action, and possible interactions between the two systems, this simple study still suggests their relative independence, except for the case when the less skilled (non-dominant) hand and arguably more deliberate responses are used. PMID:28769830
Spatiotemporal characteristics of retinal response to network-mediated photovoltaic stimulation.
Ho, Elton; Smith, Richard; Goetz, Georges; Lei, Xin; Galambos, Ludwig; Kamins, Theodore I; Harris, James; Mathieson, Keith; Palanker, Daniel; Sher, Alexander
2018-02-01
Subretinal prostheses aim at restoring sight to patients blinded by photoreceptor degeneration using electrical activation of the surviving inner retinal neurons. Today, such implants deliver visual information with low-frequency stimulation, resulting in discontinuous visual percepts. We measured retinal responses to complex visual stimuli delivered at video rate via a photovoltaic subretinal implant and by visible light. Using a multielectrode array to record from retinal ganglion cells (RGCs) in the healthy and degenerated rat retina ex vivo, we estimated their spatiotemporal properties from the spike-triggered average responses to photovoltaic binary white noise stimulus with 70-μm pixel size at 20-Hz frame rate. The average photovoltaic receptive field size was 194 ± 3 μm (mean ± SE), similar to that of visual responses (221 ± 4 μm), but response latency was significantly shorter with photovoltaic stimulation. Both visual and photovoltaic receptive fields had an opposing center-surround structure. In the healthy retina, ON RGCs had photovoltaic OFF responses, and vice versa. This reversal is consistent with depolarization of photoreceptors by electrical pulses, as opposed to their hyperpolarization under increasing light, although alternative mechanisms cannot be excluded. In degenerate retina, both ON and OFF photovoltaic responses were observed, but in the absence of visual responses, it is not clear what functional RGC types they correspond to. Degenerate retina maintained the antagonistic center-surround organization of receptive fields. These fast and spatially localized network-mediated ON and OFF responses to subretinal stimulation via photovoltaic pixels with local return electrodes raise confidence in the possibility of providing more functional prosthetic vision. NEW & NOTEWORTHY Retinal prostheses currently in clinical use have struggled to deliver visual information at naturalistic frequencies, resulting in discontinuous percepts. 
We demonstrate modulation of the retinal ganglion cells (RGC) activity using complex spatiotemporal stimuli delivered via subretinal photovoltaic implant at 20 Hz in healthy and in degenerate retina. RGCs exhibit fast and localized ON and OFF network-mediated responses, with antagonistic center-surround organization of their receptive fields.
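The spike-triggered average used to estimate these spatiotemporal receptive fields can be sketched with a toy binary white-noise stimulus and a simulated ganglion cell; the pixel count, firing rates, and response latency below are hypothetical, not measurements from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

n_frames, n_pix = 20000, 16  # binary white-noise movie, 16 pixels, 20 Hz
stim = rng.integers(0, 2, size=(n_frames, n_pix)).astype(float)

# Hypothetical RGC: fires preferentially 2 frames after its center
# pixel (pixel 7) turns ON
lag, center = 2, 7
drive = np.roll(stim[:, center], lag)
drive[:lag] = 0
spikes = rng.random(n_frames) < (0.02 + 0.3 * drive)

# Spike-triggered average: mean stimulus history preceding each spike
depth = 5  # frames of history (250 ms at 20 Hz)
spike_frames = np.nonzero(spikes)[0]
spike_frames = spike_frames[spike_frames >= depth]
sta = np.stack([stim[t - depth + 1 : t + 1] for t in spike_frames]).mean(axis=0)

# The STA deviates from the 0.5 baseline at the center pixel,
# `lag` frames before the spike (last row = spike frame)
peak_t, peak_px = np.unravel_index(np.argmax(np.abs(sta - 0.5)), sta.shape)
```

With enough spikes the STA recovers both the spatial receptive field center and the response latency, which is how photovoltaic and visual response properties could be compared on the same footing.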
Study of Pattern of Change in Handwriting Class Characters with Different Grades of Myopia
Hedge, Shruti Prabhat; Sriram
2015-01-01
Introduction Handwriting is a visuo-motor skill highly dependent on visual skills. Any defect in visual input could produce a change in handwriting. Understanding the variation in handwriting characters caused by changes in visual acuity can help in identifying learning disabilities in children and in assessing disability in the elderly. In our study we try to analyse and catalogue these changes in a person's handwriting. Materials and Methods The study was conducted among 100 subjects having normal visual acuity. They were asked to perform a set of writing tasks, which were then repeated after inducing different grades of myopia. Changes in the handwriting class characters were analysed and compared across all grades of myopia. Results The study found that letter size, pastiosity, word omissions, and inability to stay on the line all increase with changes in visual acuity. However, these findings are not proportional to the grade of myopia. Conclusion From the findings of the study it can be concluded that myopia significantly influences handwriting, and any change in visual acuity induces corresponding changes in handwriting. Letter size and pastiosity increase, whereas the ability to stay on the line and the space between lines decrease, across different grades of myopia. The changes are not linear and cannot be used to predict the grade of myopia, but they can serve as parameters suggestive of refractive error. PMID:26816917
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
Do Visual Illusions Probe the Visual Brain?: Illusions in Action without a Dorsal Visual Stream
ERIC Educational Resources Information Center
Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves
2007-01-01
Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…
Threat as a feature in visual semantic object memory.
Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John
2013-08-01
Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region showed greater signal changes for threatening items than for nonthreatening items from both the naturally occurring and man-made superordinate stimulus categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of efficient or rapid visual recognition of groups of items that confer a survival advantage. Copyright © 2012 Wiley Periodicals, Inc.
Equal Insistence of Proportion of Colour on a 2D Surface
NASA Astrophysics Data System (ADS)
Staig-Graham, B. N.
2006-06-01
Katz conducted experiments on Insistence and Equal Insistence, using an episcotister and chromatic and achromatic papers, which he viewed under different intensities of a light source and under chromatic illumination. His principle of Equal Insistence, combined with Goethe's reputed proportions of surface colours according to their luminosity and Strzeminski's concept of Unism in painting, inspires the author's current painting practice. However, a whole new route of research has been opened by the introduction of Time as a phenomenon of Equal Insistence and Image Perception Fading, under controlled conditions of observer movement at different distances, viewing angles, and illumination. Visual knowledge of Equal Insistence indicates, so far, several apparent changes to the properties of surface colours, and its actual effect upon the shape and size of paintings and symbolism. Typical of the investigation are the achromatic images of an elephant and a mouse.
Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill
2014-01-01
Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.
Recognition and surprise alter the human visual evoked response.
Neville, H; Snyder, E; Woods, D; Galambos, R
1982-01-01
Event-related brain potentials (ERPs) to colored slides contained a late positive component that was significantly enhanced when adults recognized the person, place, or painting in the photograph. Additionally, two late components changed in amplitude in correspondence with the amount of surprise reported. Because subjects received no instructions to differentiate among the slides, these changes in brain potentials reflect natural classifications made according to their perceptions and evaluations of the pictorial material. This may be a useful paradigm with which to assess perception, memory, and orienting capacities in populations, such as infants, who cannot follow verbal instructions. PMID:6952260
Visual processing affects the neural basis of auditory discrimination.
Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko
2008-12-01
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
Tebartz van Elst, Ludger; Bach, Michael; Blessing, Julia; Riedel, Andreas; Bubl, Emanuel
2015-01-01
A common neurodevelopmental disorder, autism spectrum disorder (ASD), is defined by specific patterns in social perception, social competence, communication, highly circumscribed interests, and a strong subjective need for behavioral routines. Furthermore, distinctive features of visual perception, such as markedly reduced eye contact and a tendency to focus more on small visual items than on holistic perception, have long been recognized as typical ASD characteristics. Recent debate in the scientific community discusses whether the physiology of low-level visual perception might explain such higher-level visual abnormalities. While reports of enhanced, "eagle-like" visual acuity in ASD contained methodological errors and could not be substantiated, several authors have reported alterations in even earlier stages of visual processing, such as contrast perception and motion perception at the level of the occipital cortex. Therefore, in this project, we investigated the electrophysiology of very early visual processing by analyzing pattern electroretinogram-based contrast gain, background noise amplitude, and the psychophysically measured visual acuities of participants with high-functioning ASD and of controls with equal education. Based on earlier findings, we hypothesized that alterations in early vision would be present in ASD participants. This study included 33 individuals with ASD (11 female) and 33 control individuals (12 female). The groups were matched in terms of age, gender, and education level. We found no evidence of altered electrophysiological retinal contrast processing or of altered psychophysically measured visual acuities. There appears to be no evidence for abnormalities in retinal visual processing in ASD patients, at least with respect to contrast detection.
Visualizing Internet routing changes.
Lad, Mohit; Massey, Dan; Zhang, Lixia
2006-01-01
Today's Internet provides a global data delivery service to millions of end users and routing protocols play a critical role in this service. It is important to be able to identify and diagnose any problems occurring in Internet routing. However, the Internet's sheer size makes this task difficult. One cannot easily extract out the most important or relevant routing information from the large amounts of data collected from multiple routers. To tackle this problem, we have developed Link-Rank, a tool to visualize Internet routing changes at the global scale. Link-Rank weighs links in a topological graph by the number of routes carried over each link and visually captures changes in link weights in the form of a topological graph with adjustable size. Using Link-Rank, network operators can easily observe important routing changes from massive amounts of routing data, discover otherwise unnoticed routing problems, understand the impact of topological events, and infer root causes of observed routing changes.
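The core bookkeeping behind such a tool — weighing each link by the number of routes that traverse it, then diffing two snapshots to surface the changes worth visualizing — can be sketched as follows. This is a minimal illustration with invented AS paths, not Link-Rank's actual implementation:

```python
from collections import Counter

def link_weights(routes):
    """Count how many routes traverse each directed link.

    routes: iterable of paths, each a list of node ids, e.g. [[1, 2, 3], [1, 2, 4]]
    returns: Counter mapping (u, v) -> number of routes carried over that link
    """
    w = Counter()
    for path in routes:
        for u, v in zip(path, path[1:]):
            w[(u, v)] += 1
    return w

def weight_changes(before, after):
    """Per-link weight deltas between two snapshots; the nonzero entries are
    the candidates a Link-Rank-style tool would highlight in its graph view."""
    links = set(before) | set(after)
    return {l: after[l] - before[l] for l in links if after[l] != before[l]}

# Toy example: one route shifts from traversing node 2 to traversing node 6.
old = link_weights([[1, 2, 3], [1, 2, 4], [5, 2, 3]])
new = link_weights([[1, 2, 3], [1, 6, 4], [5, 2, 3]])
print(weight_changes(old, new))
```

In the toy run above, the links that lost traffic get negative deltas and the newly used links get positive ones, which is exactly the signal a weighted-topology visualization would render.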
Herrera-Guzmán, I; Peña-Casanova, J; Lara, J P; Gudayol-Ferré, E; Böhm, P
2004-08-01
The assessment of visual perception and cognition forms an important part of any general cognitive evaluation. We have studied the possible influence of age, sex, and education on performance in visual perception tasks in a normal elderly Spanish population (90 healthy subjects). To evaluate visual perception and cognition, we used the subjects' performance on The Visual Object and Space Perception Battery (VOSP). The test consists of 8 subtests: 4 measure visual object perception (Incomplete Letters, Silhouettes, Object Decision, and Progressive Silhouettes), while the other 4 measure visual space perception (Dot Counting, Position Discrimination, Number Location, and Cube Analysis). The statistical procedures employed were either simple or multiple linear regression analyses (for subtests with a normal distribution) or Mann-Whitney tests followed by ANOVA with Scheffé correction (for subtests without a normal distribution). Age and sex were found to be significant modifying factors in the Silhouettes, Object Decision, Progressive Silhouettes, Position Discrimination, and Cube Analysis subtests. Educational level was found to be a significant predictor of function for the Silhouettes and Object Decision subtests. The results of the sample were adjusted in line with the differences observed. Our study also offers preliminary normative data for the administration of the VOSP to an elderly Spanish population. The results are discussed and compared with similar studies performed in different cultural backgrounds.
Visual perception of ADHD children with sensory processing disorder.
Jung, Hyerim; Woo, Young Jae; Kang, Je Wook; Choi, Yeon Woo; Kim, Kyeong Mi
2014-04-01
The aim of the present study was to investigate the difference in visual perception between ADHD children with and without sensory processing disorder, and the relationship between sensory processing and visual perception in children with ADHD. Participants were 47 outpatients, aged 6-8 years, diagnosed with ADHD. After excluding those who met exclusion criteria, 38 subjects were clustered into two groups, ADHD children with and without sensory processing disorder (SPD), using the SSP reported by their parents; subjects then completed the K-DTVP-2. Spearman correlation analysis was run to determine the relationship between sensory processing and visual perception, and Mann-Whitney U tests were conducted to compare the K-DTVP-2 scores of the two groups. The ADHD children with SPD performed inferiorly to the ADHD children without SPD on 3 quotients of the K-DTVP-2. The GVP score of the K-DTVP-2 was related to the Movement Sensitivity section (r = 0.368*) and the Low Energy/Weak section (r = 0.369*) of the SSP. The results of the present study suggest that among children with ADHD, visual perception is lower in those with comorbid SPD. Also, visual perception may be related to sensory processing, especially to reactions of the vestibular and proprioceptive senses. Regarding academic performance, it is necessary to consider how sensory processing issues affect visual perception in children with ADHD.
Tanaka, Hideaki
2016-01-01
Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. These findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161
Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D; Senn, Pascal
2013-01-01
To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair-Schulz-Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), web cameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. Higher frame rates (>7 fps), higher camera resolutions (>640 × 480 px) and shorter picture/sound delays (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full screen mode. There was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8 percentage points, p = 0.032). Web cameras have the potential to improve telecommunication for hearing-impaired individuals.
Differential temporal dynamics during visual imagery and perception.
Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj
2018-05-29
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.
Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers
Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin
2017-01-01
Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513
Most, Tova; Aviner, Chen
2009-01-01
This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.
Seen, Unseen or Overlooked? How Can Visual Perception Develop through a Multimodal Enquiry?
ERIC Educational Resources Information Center
Payne, Rachel
2012-01-01
This article outlines an exploration into the development of visual perception through analysing the process of taking photographs of the mundane as small-scale research. A preoccupation with social construction of the visual lies at the heart of the investigation by correlating the perceptive process to Mitchell's (2002) counter thesis for visual…
Parrish, Audrey E; Brosnan, Sarah F; Beran, Michael J
2015-10-01
Studying visual illusions is critical to understanding typical visual perception. We investigated whether rhesus monkeys (Macaca mulatta) and capuchin monkeys (Cebus apella) perceived the Delboeuf illusion in a manner similar to human adults (Homo sapiens). To test this, in Experiment 1, we presented monkeys and humans with a relative discrimination task that required subjects to choose the larger of 2 central dots that were sometimes encircled by concentric rings. As predicted, humans demonstrated evidence of the Delboeuf illusion, overestimating central dots when small rings surrounded them and underestimating the size of central dots when large rings surrounded them. However, monkeys did not show evidence of the illusion. To rule out an alternate explanation, in Experiment 2, we presented all species with an absolute classification task that required them to classify a central dot as "small" or "large." We presented a range of ring sizes to determine whether the Delboeuf illusion would occur for any dot-to-ring ratios. Here, we found evidence of the Delboeuf illusion in all 3 species. Humans and monkeys underestimated central dot size to a progressively greater degree with progressively larger rings. The Delboeuf illusion now has been extended to include capuchin monkeys and rhesus monkeys, and through such comparative investigations we can better evaluate hypotheses regarding illusion perception among nonhuman animals. (c) 2015 APA, all rights reserved.
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception
ERIC Educational Resources Information Center
Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.
2016-01-01
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…
NASA Technical Reports Server (NTRS)
Hosman, R. J. A. W.; Vandervaart, J. C.
1984-01-01
An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.
Auditory-visual fusion in speech perception in children with cochlear implants
Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.
2005-01-01
Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316
Predictions penetrate perception: Converging insights from brain, behaviour and disorder
O’Callaghan, Claire; Kveraga, Kestutis; Shine, James M; Adams, Reginald B.; Bar, Moshe
2018-01-01
It is argued that during ongoing visual perception, the brain is generating top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence that indicate a wide spectrum of domains—including object recognition, contextual associations, cognitive biases and affective state—that can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception. PMID:27222169
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds
Wright, W. Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed. PMID:24782724
A Qualitative Study of the Change-of-Shift Report at the Patients' Bedside.
Grimshaw, John; Hatch, Daniel; Willard, Melissa; Abraham, Sam
Concerns about patient bedside change-of-shift reporting at a community hospital in northern Indiana stimulated the development of this qualitative phenomenological study. A review of the literature revealed a research deficit in acute care nurses' perceptions of bedside reporting in relation to compliance. The research question addressed in this study was, "What are acute care nurses' perceptions of the change-of-shift report at the patients' bedside?" Personal interviews were conducted on 7 medical, surgical, and intensive care unit nurse participants at a community hospital in northern Indiana. Five themes were identified from the collected data, which included the time factor, continuity of care, visualization, and challenges in the communication of discreet information.
Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices
Sprague, Thomas C.; Serences, John T.
2014-01-01
Computational theories propose that attention modulates the topographical landscape of spatial ‘priority’ maps in regions of visual cortex so that the location of an important object is associated with higher activation levels. While single-unit recording studies have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here, we used fMRI and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size. PMID:24212672
Primary and multisensory cortical activity is correlated with audiovisual percepts.
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven
2010-04-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.
The Perception of Cooperativeness Without Any Visual or Auditory Communication.
Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan
2015-12-01
Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.
Kuntz, Jessica R; Karl, Jenni M; Doan, Jon B; Whishaw, Ian Q
2018-04-01
Reach-to-grasp movements feature the integration of a reach directed by the extrinsic (location) features of a target and a grasp directed by the intrinsic (size, shape) features of a target. The action-perception theory suggests that integration and scaling of a reach-to-grasp movement, including its trajectory and the concurrent digit shaping, are features that depend upon online action pathways of the dorsal visuomotor stream. Scaling is much less accurate for a pantomime reach-to-grasp movement, a pretend reach with the target object absent. Thus, the action-perception theory proposes that pantomime movement is mediated by perceptual pathways of the ventral visuomotor stream. A distinguishing visual feature of a real reach-to-grasp movement is gaze anchoring, in which a participant visually fixates the target throughout the reach and disengages, often by blinking or looking away/averting the head, at about the time that the target is grasped. The present study examined whether gaze anchoring is associated with pantomime reaching. The eye and hand movements of participants were recorded as they reached for a ball of one of three sizes, located on a pedestal at arm's length, or pantomimed the same reach with the ball and pedestal absent. The kinematic measures for real reach-to-grasp movements were coupled to the location and size of the target, whereas the kinematic measures for pantomime reach-to-grasp, although grossly reflecting target features, were significantly altered. Gaze anchoring was also tightly coupled to the target for real reach-to-grasp movements, but there was no systematic focus for gaze, either in relation to the virtual target, the previous location of the target, or the participant's reaching hand, for pantomime reach-to-grasp. The presence of gaze anchoring during real vs. its absence in pantomime reach-to-grasp supports the action-perception theory that real, but not pantomime, reaches are online visuomotor actions and is discussed in relation to the neural control of real and pantomime reach-to-grasp movements.
Rise and fall of the two visual systems theory.
Rossetti, Yves; Pisella, Laure; McIntosh, Robert D
2017-06-01
Among the many dissociations describing the visual system, the dual theory of two visual systems, respectively dedicated to perception and action, has attracted considerable support. There are psychophysical, anatomical and neuropsychological arguments in favor of this theory. Several behavioral studies that used sensory and motor psychophysical parameters observed differences between perceptual and motor responses. The anatomical network of the visual system in the non-human primate was readily organized according to two major pathways, dorsal and ventral. Neuropsychological studies, exploring optic ataxia and visual agnosia as characteristic deficits of these two pathways, led to the proposal of a functional double dissociation between visuomotor and visual perceptual functions. After a major wave of popularity that promoted great advances, particularly in knowledge of visuomotor functions, the guiding theory is now being reconsidered. Firstly, the idea of a double dissociation between optic ataxia and visual form agnosia, as cleanly separating visuomotor from visual perceptual functions, is no longer tenable; optic ataxia does not support a dissociation between perception and action and might be more accurately viewed as a negative image of action blindsight. Secondly, dissociations between perceptual and motor responses highlighted in the framework of this theory concern a very elementary level of action, even automatically guided action routines. Thirdly, the richly interconnected network of the visual brain yields few arguments in favor of a strict perception/action dissociation. Overall, the dissociation between motor function and perceptual function explored by these behavioral and neuropsychological studies can help define an automatic level of action organization that is deficient in optic ataxia and preserved in action blindsight, and underlines the renewed need to consider the perception-action circle as a functional ensemble.
Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Visual Acuity does not Moderate Effect Sizes of Higher-Level Cognitive Tasks
Houston, James R.; Bennett, Ilana J.; Allen, Philip A.; Madden, David J.
2016-01-01
Background: Declining visual capacities in older adults have been posited as a driving force behind adult age differences in higher-order cognitive functions (e.g., the "common cause" hypothesis of Lindenberger & Baltes, 1994). McGowan, Patterson and Jordan (2013) also found that a surprisingly large number of published cognitive aging studies failed to include adequate measures of visual acuity. However, a recent meta-analysis of three studies (LaFleur & Salthouse, 2014) failed to find evidence that visual acuity moderated or mediated age differences in higher-level cognitive processes. In order to provide a more extensive test of whether visual acuity moderates age differences in higher-level cognitive processes, we conducted a broader meta-analysis of the topic. Methods: Using results from 456 studies, we calculated effect sizes for the main effect of age across four cognitive domains (attention, executive function, memory, and perception/language) separately for five levels of visual acuity criteria (no criteria, undisclosed criteria, self-reported acuity, 20/80-20/31, and 20/30 or better). Results: As expected, age had a significant effect on each cognitive domain. However, these age effects did not further differ as a function of visual acuity criteria. Conclusion: The current meta-analytic, cross-sectional results suggest that visual acuity is not significantly related to age group differences in higher-level cognitive performance, thereby replicating LaFleur and Salthouse (2014). Further efforts are needed to determine whether other measures of visual functioning (e.g. contrast sensitivity, luminance) affect age differences in cognitive functioning. PMID:27070044
ERIC Educational Resources Information Center
Erdener, Dogu; Burnham, Denis
2018-01-01
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…
ERIC Educational Resources Information Center
Klein, Sheryl; Guiltner, Val; Sollereder, Patti; Cui, Ying
2011-01-01
Occupational therapists assess fine motor, visual motor, visual perception, and visual skill development, but knowledge of the relationships between scores on sensorimotor performance measures and handwriting legibility and speed is limited. Ninety-nine students in grades three to six with learning and/or behavior problems completed the Upper-Limb…
Touch to see: neuropsychological evidence of a sensory mirror system for touch.
Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo
2012-09-01
The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of which underlying modality, visual and/or tactile, was damaged. Patients and healthy controls underwent a detection task comprising visual stimuli that either depicted touch or lacked any tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touch, regardless of the viewing perspective, affects visual perception differently depending on which sensory modality is damaged: in patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.
Self body-size perception in an insect
NASA Astrophysics Data System (ADS)
Ben-Nun, Amir; Guershon, Moshe; Ayali, Amir
2013-05-01
Animals negotiating complex environments encounter a wide range of obstacles of different shapes and sizes. It is greatly beneficial for the animal to react to such obstacles in a precise, context-specific manner, in order to avoid harm or even simply to minimize energy expenditure. An essential key challenge is, therefore, an estimation of the animal's own physical characteristics, such as body size. A further important aspect of self body-size perception (or SBSP) is the need to update it in accordance with changes in the animal's size and proportions. Despite the major role of SBSP in functional behavior, little is known about whether and how it is mediated. Here, we demonstrate that insects are also capable of self-perception of body size and that this is a vital factor in allowing them to adjust their behavior following the sudden and dramatic growth associated with periodic molting. We reveal that locusts' SBSP is strongly correlated with their body size. However, we show that the dramatic change in size accompanying adult emergence is not sufficient to create a new and updated SBSP. Rather, this is created and then consolidated only following the individuals' experience and interaction with the physical environment. Behavioral or pharmacological manipulations can both result in maintenance of the old larval SBSP. Our results emphasize the importance of learning and memory-related processes in the development and update of SBSP, and highlight the advantage of insects as good models for a detailed study of the neurobiological and molecular aspects of SBSP.
Muñoz-Ruata, J; Caro-Martínez, E; Martínez Pérez, L; Borja, M
2010-12-01
Perception disorders are frequently observed in persons with intellectual disability (ID) and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the early component of visual event-related potentials, the N1 wave, which is related to perception alterations in several pathologies. Additionally, the relationship between N1 and neuropsychological visual tests was studied with the aim of understanding its functional significance in persons with ID. A group of 69 subjects with etiologically heterogeneous mild ID performed an odd-ball task of active discrimination of geometric figures. N1a (frontal) and N1b (post-occipital) waves were obtained from the evoked potentials. They also performed several neuropsychological tests. Only component N1a, produced by the target stimulus, showed significant correlations with the visual integration, visual semantic association, and visual analogical reasoning tests, the Perceptual Reasoning Index (Wechsler Intelligence Scale for Children, Fourth Edition), and intelligence quotient. The systematic correlations of the N1a (frontal), and not the N1b (posterior), with performance on perceptual ability tasks suggest that the visual perception process involves frontal participation. These correlations support the idea that the N1a and N1b are not equivalent. The relationship between frontal functions and early stages of visual perception is reviewed and discussed, as well as the frontal contribution to the neuropsychological tests used. A possible relationship between frontal activity dysfunction in ID and perceptive problems is suggested. The perceptual alterations observed in persons with ID could indeed be due to altered sensory areas, but also to a failure of frontal participation in perceptive processes conceived as elaborations inside reverberant circuits of perception-action. © 2010 The Authors.
Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more human-centered, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
ERIC Educational Resources Information Center
Buldu, Mehmet; Shaban, Mohamed S.
2010-01-01
This study portrayed a picture of kindergarten through 3rd-grade teachers who teach visual arts, their perceptions of the value of visual arts, their visual arts teaching practices, visual arts experiences provided to young learners in school, and major factors and/or influences that affect their teaching of visual arts. The sample for this study…
Temporal and spatial localization of prediction-error signals in the visual brain.
Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W
2017-04-01
It has been suggested that the brain pre-empts changes in the environment through generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remain elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid-rotations and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain indexing the resolution of top-down influences and incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions. Copyright © 2017 Elsevier B.V. All rights reserved.
de Sousa, Alexandra A.; Proulx, Michael J.
2014-01-01
An overall relationship between brain size and cognitive ability exists across primates. Can more specific information about neural function be gleaned from cortical area volumes? Numerous studies have found significant relationships between brain structures and behaviors. However, few studies have speculated about brain structure-function relationships from the microanatomical to the macroanatomical level. Here we address this problem in comparative neuroanatomy, where the functional relevance of overall brain size and the sizes of cortical regions have been poorly understood, by considering comparative psychology, with measures of visual acuity and the perception of visual illusions. We outline a model where the macroscopic size (volume or surface area) of a cortical region (such as the primary visual cortex, V1) is related to the microstructure of discrete brain regions. The hypothesis developed here is that an absolutely larger V1 can process more information with greater fidelity due to having more neurons to represent a field of space. This is the first time that the necessary comparative neuroanatomical research at the microstructural level has been brought to bear on the issue. The evidence suggests that as the size of V1 increases: the number of neurons increases, the neuron density decreases, and the density of neuronal connections increases. Thus, we describe how information about gross neuromorphology, using V1 as a model for the study of other cortical areas, may permit interpretations of cortical function. PMID:25009469
Does visual attention drive the dynamics of bistable perception?
Dieter, Kevin C.; Brascamp, Jan; Tadin, Duje; Blake, Randolph
2016-01-01
How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct – depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences. PMID:27230785
Perceptual learning in a non-human primate model of artificial vision
Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.
2016-01-01
Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058
Visual cortex entrains to sign language.
Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel
2017-06-13
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
Perceiving the present and a systematization of illusions.
Changizi, Mark A; Hsieh, Andrew; Nijhawan, Romi; Kanai, Ryota; Shimojo, Shinsuke
2008-04-05
Over the history of the study of visual perception there has been great success at discovering countless visual illusions. There has been less success in organizing the overwhelming variety of illusions into empirical generalizations (much less explaining them all via a unifying theory). Here we show that it is possible to systematically organize more than 50 kinds of illusion into a 7 × 4 matrix of 28 classes. In particular, this article demonstrates that (1) smaller sizes, (2) slower speeds, (3) greater luminance contrast, (4) farther distance, (5) lower eccentricity, (6) greater proximity to the vanishing point, and (7) greater proximity to the focus of expansion all tend to have similar perceptual effects, namely, to (A) increase perceived size, (B) increase perceived speed, (C) decrease perceived luminance contrast, and (D) decrease perceived distance. The detection of these empirical regularities was motivated by a hypothesis, called "perceiving the present," that the visual system possesses mechanisms for compensating neural delay during forward motion. This article shows how this hypothesis predicts these empirical regularities. 2008 Cognitive Science Society, Inc.
Plow, Ela B; Obretenova, Souzana N; Fregni, Felipe; Pascual-Leone, Alvaro; Merabet, Lotfi B
2012-01-01
Vision Restoration Therapy (VRT) aims to improve visual field function by systematically training regions of residual vision associated with the activity of suboptimal firing neurons within the occipital cortex. Transcranial direct current stimulation (tDCS) has been shown to modulate cortical excitability. The objective of this study was to assess the possible efficacy of tDCS combined with VRT. The authors conducted a randomized, double-blind, demonstration-of-concept pilot study in which participants were assigned to either VRT and tDCS or VRT and sham. The anode was placed over the occipital pole to target both affected and unaffected lobes. One-hour training sessions were carried out 3 times per week for 3 months in a laboratory. Outcome measures included objective and subjective changes in visual field, recording of visual fixation performance, and vision-related activities of daily living (ADLs) and quality of life (QOL). Although 12 participants were enrolled, only 8 could be analyzed. The VRT and tDCS group demonstrated significantly greater expansion in visual field and improvement on ADLs compared with the VRT and sham group. Contrary to expectations, subjective perception of visual field change was greater in the VRT and sham group. QOL did not change for either group. The observed changes in visual field were unrelated to compensatory eye movements, as shown with fixation monitoring. The combination of occipital cortical tDCS with visual field rehabilitation appears to enhance visual functional outcomes compared with visual rehabilitation alone. tDCS may enhance inherent mechanisms of plasticity associated with training.
Perception of Stand-on-ability: Do Geographical Slants Feel Steeper Than They Look?
Hajnal, Alen; Wagman, Jeffrey B; Doyon, Jonathan K; Clark, Joseph D
2016-07-01
Past research has shown that haptically perceived surface slant by foot is matched with visually perceived slant by a factor of 0.81: slopes perceived visually appear shallower than when stood on without looking. We sought to identify the sources of this discrepancy by asking participants to judge whether they would be able to stand on an inclined ramp. In the first experiment, visual perception was compared to pedal perception, in which participants took half a step with one foot onto an occluded ramp. Visual perception closely matched the actual maximal slope angle that one could stand on, whereas pedal perception underestimated it. Participants may have been less stable in the pedal condition while taking half a step onto the ramp. We controlled for this by having participants hold onto a sturdy tripod in the pedal condition (Experiment 2). This did not eliminate the difference between visual and haptic perception, but repeating the task while sitting on a chair did (Experiment 3). Beyond balance requirements, pedal perception may also be constrained by the limited range of motion at the ankle and knee joints while standing. Indeed, when we restricted range of motion with an ankle brace, pedal perception underestimated the affordance (Experiment 4). We discuss implications for ecological theory, including the notion of functional equivalence and the role of exploration in perception.
Visual perception and imagery: a new molecular hypothesis.
Bókkon, I
2009-05-01
Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather constitute a strictly regulated mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes.
This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.
Knowledge is power: how conceptual knowledge transforms visual cognition.
Collins, Jessica A; Olson, Ingrid R
2014-08-01
In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.
Serial dependence promotes object stability during occlusion
Liberman, Alina; Zhang, Kathy; Whitney, David
2016-01-01
Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility.
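The "perceptual pull" described above can be quantified as a signed response error conditioned on the orientation difference between consecutive stimuli. The following is a hedged sketch on simulated data: the pull strength, noise level, and two-way split are invented for illustration, and published analyses (e.g., Fischer & Whitney, 2014) instead fit a derivative-of-Gaussian to the full error curve.

```python
import numpy as np

def signed_diff(a, b, period=180.0):
    """Signed circular difference a - b for orientations in degrees."""
    return (np.asarray(a) - b + period / 2) % period - period / 2

# Simulate an observer whose reports are pulled toward the previous trial's
# orientation (pull strength 0.1 and noise SD 3 deg are arbitrary choices).
rng = np.random.default_rng(1)
n = 5000
ori = rng.uniform(0, 180, n)             # random Gabor orientations
rel = signed_diff(np.roll(ori, 1), ori)  # previous minus current (trial 0 wraps)
report = ori + 0.1 * rel + rng.normal(0, 3, n)  # pulled reports + motor noise

err = signed_diff(report, ori)           # signed response error
bias_cw = err[rel > 0].mean()            # previous rotated one way...
bias_ccw = err[rel < 0].mean()           # ...or the other
print(f"mean error: {bias_cw:+.1f} deg vs {bias_ccw:+.1f} deg")
```

Serial dependence shows up as errors that share the sign of the previous-minus-current difference: the two conditional means have opposite signs, each pulled toward the preceding orientation.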
Cox, Jolene A; Beanland, Vanessa; Filtness, Ashleigh J
2017-10-03
The ability to detect changing visual information is a vital component of safe driving. In addition to detecting changing visual information, drivers must also interpret its relevance to safety. Environmental changes considered to have high safety relevance will likely demand greater attention and more timely responses than those considered to have lower safety relevance. The aim of this study was to explore factors that are likely to influence perceptions of risk and safety regarding changing visual information in the driving environment. Factors explored were the environment in which the change occurs (i.e., urban vs. rural), the type of object that changes, and the driver's age, experience, and risk sensitivity. Sixty-three licensed drivers aged 18-70 years completed a hazard rating task, which required them to rate the perceived hazardousness of changing specific elements within urban and rural driving environments. Three attributes of potential hazards were systematically manipulated: the environment (urban, rural); the type of object changed (road sign, car, motorcycle, pedestrian, traffic light, animal, tree); and its inherent safety risk (low risk, high risk). Inherent safety risk was manipulated by either varying the object's placement, on/near or away from the road, or altering an infrastructure element that would require a change to driver behavior. Participants also completed two driving-related risk perception tasks, rating their relative crash risk and perceived risk of aberrant driving behaviors. Driver age was not significantly associated with hazard ratings, but individual differences in perceived risk of aberrant driving behaviors predicted hazard ratings, suggesting that general driving-related risk sensitivity plays a strong role in safety perception. 
In both urban and rural scenes, there were significant associations between hazard ratings and inherent safety risk, with low-risk changes perceived as consistently less hazardous than high-risk changes; however, the effect was larger for urban environments. There were also effects of object type, with certain objects rated as consistently more safety relevant. In urban scenes, changes involving pedestrians were rated significantly more hazardous than all other objects, and in rural scenes, changes involving animals were rated as significantly more hazardous. Notably, hazard ratings were found to be higher in urban compared with rural driving environments, even when changes were matched between environments. This study demonstrates that drivers perceive rural roads as less risky than urban roads, even when similar scenarios occur in both environments. Age did not affect hazard ratings; instead, the findings suggest that the assessment of risk posed by hazards is influenced more by individual differences in risk sensitivity. This highlights the need for driver education to account for appraisal of hazards' risk and relevance, in addition to hazard detection, when considering factors that promote road safety.
Night-shift work increases cold pain perception.
Pieh, Christoph; Jank, Robert; Waiß, Christoph; Pfeifer, Christian; Probst, Thomas; Lahmann, Claas; Oberndorfer, Stefan
2018-05-01
Although night-shift work (NSW) is associated with a higher risk for several physical and mental disorders, the impact of NSW on pain perception is still unclear. This study investigates the impact of NSW on cold pain perception, taking mood and sleepiness into account. Quantitative sensory testing (QST) was performed in healthy night-shift workers. Cold pain threshold as well as tonic cold pain was assessed after one habitual night (T1), after a 12-hour NSW (T2) and after one recovery night (T3). Sleep quality was measured with the Pittsburgh Sleep Quality Index (PSQI) before T1, sleepiness with the Stanford Sleepiness Scale (SSS) and mood with a German short-version of the Profile of Mood States (ASTS) at T1, T2 and T3. Depending on the distribution of the data, ANOVAs or Friedman tests as well as t- or Wilcoxon tests were performed. Nineteen healthy shift-workers (13 females; 29.7 ± 7.5 years old; 8.1 ± 6.6 years in shift work, PSQI: 4.7 ± 2.2) were included. Tonic cold pain showed a significant difference between T1 (48.2 ± 27.5 mm), T2 (61.7 ± 26.6 mm; effect size: Cohen's d = .49; percent change 28%), and T3 (52.1 ± 28.7 mm) on a 0-100 mm Visual Analog Scale (p = 0.007). Cold pain threshold changed from 11.0 ± 7.9 °C (T1) to 14.5 ± 8.8 °C (T2) (p = 0.04); however, an ANOVA comparing T1, T2, and T3 was not significant (p = 0.095). Sleepiness (SSS) and mood (ASTS) changed significantly between T1, T2 and T3 (p-values < 0.01). The change of mood, but not of sleepiness, correlated with the difference in tonic cold pain from T1 to T2 (R = 0.53; R² = 0.29; p = 0.022). NSW increases cold pain perception: the same tonic cold pain stimulus is rated 28% more painful after NSW and normalizes after a recovery night. Increases in cold pain perception due to NSW appear to be more strongly related to changes in mood than to changes in sleepiness.
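The reported effect size and percent change can be reproduced from the summary statistics in the abstract. A small sketch, assuming Cohen's d was computed with the baseline (T1) standard deviation as denominator; this is one of several conventions and is not stated in the abstract, though it reproduces the reported value:

```python
import math

# Tonic cold pain ratings (0-100 mm VAS), mean and SD from the abstract
m1, sd1 = 48.2, 27.5   # T1: after a habitual night
m2, sd2 = 61.7, 26.6   # T2: after a 12-hour night shift

# Cohen's d with the baseline SD as denominator (assumed convention)
d = (m2 - m1) / sd1
percent_change = 100 * (m2 - m1) / m1

# The pooled-SD convention would give a slightly larger value (~0.50)
d_pooled = (m2 - m1) / math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)

print(f"d = {d:.2f}, change = {percent_change:.0f}%")  # d = 0.49, change = 28%
```

Either way the arithmetic matches the abstract: a 13.5 mm increase on a baseline of 48.2 mm is a 28% change, and roughly half a standard deviation.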
Object perception is selectively slowed by a visually similar working memory load.
Robinson, Alan; Manzi, Alberto; Triesch, Jochen
2008-12-22
The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer-rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.
Optical images of visible and invisible percepts in the primary visual cortex of primates
Macknik, Stephen L.; Haglund, Michael M.
1999-01-01
We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus.
Hu, Meng; Liang, Hualou
2013-04-01
Generalized flash suppression (GFS), in which a salient visual stimulus can be rendered invisible despite continuous retinal input, provides a rare opportunity to directly study the neural mechanism of visual perception. Previous work based on linear methods, such as spectral analysis, on local field potential (LFP) during GFS has shown that LFP power at distinctive frequency bands is differentially modulated by perceptual suppression. Yet, the linear method alone may be insufficient for the full assessment of neural dynamics due to the fundamentally nonlinear nature of neural signals. In this study, we set out to analyze the LFP data collected from multiple visual areas in V1, V2 and V4 of macaque monkeys performing the GFS task, using a nonlinear method - adaptive multi-scale entropy (AME) - to reveal the neural dynamics of perceptual suppression. In addition, we propose a new cross-entropy measure at multiple scales, namely adaptive multi-scale cross-entropy (AMCE), to assess the nonlinear functional connectivity between two cortical areas. We show that: (1) multi-scale entropy exhibits percept-related changes in all three areas, with higher entropy observed during perceptual suppression; (2) the magnitude of the perception-related entropy changes increases systematically over successive hierarchical stages (i.e. from lower areas V1 to V2, up to higher area V4); and (3) cross-entropy between any two cortical areas reveals a higher degree of asynchrony or dissimilarity during perceptual suppression, indicating a decreased functional connectivity between cortical areas. These results, taken together, suggest that perceptual suppression is related to a reduced functional connectivity and increased uncertainty of neural responses, and that the modulation of perceptual suppression is more effective at higher visual cortical areas. AME is demonstrated to be a useful technique in revealing the underlying dynamics of nonlinear/nonstationary neural signals.
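Multi-scale entropy of the kind referenced above combines coarse-graining with sample entropy. A compact sketch follows; this plain multi-scale entropy is a simplified stand-in for the adaptive variant (AME) used in the study, and parameter choices such as m = 2 and r = 0.2 SD are common defaults, not the authors'.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B): B counts template pairs of length m that
    match within tolerance r (Chebyshev distance), A pairs of length m+1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()   # common default tolerance
    n = len(x)

    def match_pairs(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return (np.sum(dist < r) - len(templates)) / 2  # drop self-matches

    return -np.log(match_pairs(m + 1) / match_pairs(m))

def multiscale_entropy(x, scales=(1, 2, 3)):
    """Coarse-grain by non-overlapping means at each scale, then take the
    sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[: len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in scales]
```

On this construction, an unpredictable signal (white noise) yields a higher sample entropy than a regular one (a sine wave), matching the abstract's reading of higher entropy as greater uncertainty of neural responses.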
Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow
Layton, Oliver W.; Fajen, Brett R.
2016-01-01
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning, which is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.
Spering, Miriam; Montagnini, Anna
2011-04-22
Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information.
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces.
Toward Model Building for Visual Aesthetic Perception
Lughofer, Edwin; Zeng, Xianyi
2017-01-01
Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, quartet, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation.
Visual imagery without visual perception: lessons from blind subjects
Bértolo, Helder
2014-08-01
The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand if the two processes share the same mechanisms or if they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review some of the works providing evidence for both claims. It seems that studying visual imagery in blind subjects can be used as a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group using visual activation in dreams and its relation with the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.
Performance considerations for high-definition head-mounted displays
Edwards, Oliver J.; Larimer, James; Gille, Jennifer
1992-01-01
Image optimization in the design of helmet-mounted displays (HMDs) for military systems is discussed within the framework of a systems-engineering approach that encompasses (1) a description of natural targets in the field; (2) the characteristics of human visual perception; and (3) device specifications that directly relate to these ecological and human-factors parameters. Attention is given to target size and contrast and to the relationship of the modulation transfer function to image resolution.
Getzmann, Stephan; Wascher, Edmund
2017-02-01
Speech understanding in the presence of concurrent sound is a major challenge, especially for older persons. In particular, conversational turn-takings usually result in switch costs, as indicated by a decline in speech perception after changes in the relevant target talker. Here, we investigated whether visual cues indicating the future position of a target talker may reduce the costs of switching in younger and older adults. We employed a speech perception task, in which sequences of short words were simultaneously presented by three talkers, and analysed behavioural measures and event-related potentials (ERPs). Informative cues resulted in increased performance after a spatial change in target talker compared to uninformative cues, which did not indicate the future target position. Especially the older participants benefited from knowing the future target position in advance, as indicated by reduced response times after informative cues. The ERP analysis revealed an overall reduced N2, and a reduced P3b to changes in the target talker location in older participants, suggesting reduced inhibitory control and context updating. On the other hand, a pronounced frontal late positive complex (f-LPC) to the informative cues indicated increased allocation of attentional resources to changes in target talker in the older group, in line with the decline-compensation hypothesis. Thus, knowing where to listen has the potential to compensate for age-related decline in attentional switching in a highly variable cocktail-party environment.
Krahmer, Emiel; Swerts, Marc
2007-01-01
Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…
Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Yasunaga, Masashi; Ogawa, Susumu; Suzuki, Hiroyuki; Imanaka, Kuniyasu
2017-07-01
Older adults tend to overestimate their step-over ability. However, it is unclear whether this is caused by inaccurate self-estimation of physical ability or by inaccurate perception of height. We therefore measured both visual height perception and self-estimation of step-over ability among young and older adults. Forty-seven older and 16 young adults performed a height perception test (HPT) and a step-over test (SOT). Participants visually judged the height of vertical bars from distances of 7 and 1 m away in the HPT, then self-estimated and, subsequently, actually performed a step-over action in the SOT. The results showed no significant difference between young and older adults in visual height perception. In the SOT, young adults tended to underestimate their step-over ability, whereas older adults either overestimated their abilities or underestimated them to a lesser extent than did the young adults. Moreover, visual height perception was not correlated with the self-estimation of step-over ability in either young or older adults. These results suggest that the self-overestimation of step-over ability observed in some healthy older adults may not be caused by the nature of visual height perception, but by other factors, such as an age-related change in the self-estimation of physical ability itself.
Organization of area hV5/MT+ in subjects with homonymous visual field defects.
Papanikolaou, Amalia; Keliris, Georgios A; Papageorgiou, T Dorina; Schiefer, Ulrich; Logothetis, Nikos K; Smirnakis, Stelios M
2018-04-06
Damage to the primary visual cortex (V1) leads to a visual field loss (scotoma) in the retinotopically corresponding part of the visual field. Nonetheless, a small amount of residual visual sensitivity persists within the blind field. This residual capacity has been linked to activity observed in the middle temporal area complex (V5/MT+). However, it remains unknown whether the organization of hV5/MT+ changes following early visual cortical lesions. We studied the organization of area hV5/MT+ of five patients with dense homonymous defects in a quadrant of the visual field as a result of partial V1+ or optic radiation lesions. To do so, we developed a new method, which models the boundaries of population receptive fields directly from the BOLD signal of each voxel in the visual cortex. We found responses in hV5/MT+ arising inside the scotoma for all patients and identified two possible sources of activation: 1) responses might originate from partially lesioned parts of area V1 corresponding to the scotoma, and 2) responses can also originate independent of area V1 input suggesting the existence of functional V1-bypassing pathways. Apparently, visually driven activity observed in hV5/MT+ is not sufficient to mediate conscious vision. More surprisingly, visually driven activity in corresponding regions of V1 and early extrastriate areas including hV5/MT+ did not guarantee visual perception in the group of patients with post-geniculate lesions that we examined. This suggests that the fine coordination of visual activity patterns across visual areas may be an important determinant of whether visual perception persists following visual cortical lesions. Copyright © 2018 Elsevier Inc. All rights reserved.
Konstantinou, Nikos; Beal, Eleanor; King, Jean-Remi; Lavie, Nilli
2014-10-01
We establish a new dissociation between the roles of working memory (WM) cognitive control and visual maintenance in selective attention as measured by the efficiency of distractor rejection. The extent to which focused selective attention can prevent distraction has been shown to critically depend on the level and type of load involved in the task. High perceptual load that consumes perceptual capacity leads to reduced distractor processing, whereas high WM load that reduces WM ability to exert priority-based executive cognitive control over the task results in increased distractor processing (e.g., Lavie, Trends in Cognitive Sciences, 9(2), 75-82, 2005). WM also serves to maintain task-relevant visual representations, and such visual maintenance is known to recruit the same sensory cortices as those involved in perception (e.g., Pasternak & Greenlee, Nature Reviews Neuroscience, 6(2), 97-107, 2005). These findings led us to hypothesize that loading WM with visual maintenance would reduce visual capacity involved in perception, thus resulting in reduced distractor processing-similar to perceptual load and opposite to WM cognitive control load. Distractor processing was assessed in a response competition task, presented during the memory interval (or during encoding; Experiment 1a) of a WM task. Loading visual maintenance or encoding by increased set size for a memory sample of shapes, colors, and locations led to reduced distractor response competition effects. In contrast, loading WM cognitive control with verbal rehearsal of a random letter set led to increased distractor effects. These findings confirm load theory predictions and provide a novel functional distinction between the roles of WM maintenance and cognitive control in selective attention.
Relationship between individual differences in speech processing and cognitive functions.
Ou, Jinghua; Law, Sam-Po; Fung, Roxana
2015-12-01
A growing body of research has suggested that cognitive abilities may play a role in individual differences in speech processing. The present study took advantage of a widespread linguistic phenomenon of sound change to systematically assess the relationships between speech processing and various components of attention and working memory in the auditory and visual modalities among typically developed Cantonese-speaking individuals. The individual variations in speech processing are captured in an ongoing sound change-tone merging in Hong Kong Cantonese, in which typically developed native speakers are reported to lose the distinctions between some tonal contrasts in perception and/or production. Three groups of participants were recruited, with a first group of good perception and production, a second group of good perception but poor production, and a third group of good production but poor perception. Our findings revealed that modality-independent abilities of attentional switching/control and working memory might contribute to individual differences in patterns of speech perception and production as well as discrimination latencies among typically developed speakers. The findings not only have the potential to generalize to speech processing in other languages, but also broaden our understanding of the omnipresent phenomenon of language change in all languages.
Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments
2016-09-01
yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications.1,5–7 Annotation of images is... Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G
The levels of perceptual processing and the neural correlates of increasing subjective visibility.
Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel
2017-10-01
According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.
Pailian, Hrag; Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin
2016-08-01
Research in adults has aimed to characterize constraints on the capacity of Visual Working Memory (VWM), in part because of the system's broader impacts throughout cognition. However, less is known about how VWM develops in childhood. Existing work has reached conflicting conclusions as to whether VWM storage capacity increases after infancy, and if so, when and by how much. One challenge is that previous studies did not control for developmental changes in attention and executive processing, which also may undergo improvement. We investigated the development of VWM storage capacity in children from 3 to 8 years of age, and in adults, while controlling for developmental change in exogenous and endogenous attention and executive control. Our results reveal that, when controlling for improvements in these abilities, VWM storage capacity increases across development and approaches adult-like levels between ages 6 and 8 years. More generally, this work highlights the value of estimating working memory, attention, perception, and decision-making components together.
Masking reduces orientation selectivity in rat visual cortex
Alwis, Dasuni S.; Richards, Katrina L.
2016-01-01
In visual masking the perception of a target stimulus is impaired by a preceding (forward) or succeeding (backward) mask stimulus. The illusion is of interest because it allows uncoupling of the physical stimulus, its neuronal representation, and its perception. To understand the neuronal correlates of masking, we examined how masks affected the neuronal responses to oriented target stimuli in the primary visual cortex (V1) of anesthetized rats (n = 37). Target stimuli were circular gratings with 12 orientations; mask stimuli were plaids created as a binarized sum of all possible target orientations. Spatially, masks were presented either overlapping or surrounding the target. Temporally, targets and masks were presented for 33 ms, but the stimulus onset asynchrony (SOA) of their relative appearance was varied. For the first time, we examine how spatially overlapping and center-surround masking affect orientation discriminability (rather than visibility) in V1. Regardless of the spatial or temporal arrangement of stimuli, the greatest reductions in firing rate and orientation selectivity occurred for the shortest SOAs. Interestingly, analyses conducted separately for transient and sustained target response components showed that changes in orientation selectivity do not always coincide with changes in firing rate. Given the near-instantaneous reductions observed in orientation selectivity even when target and mask do not spatially overlap, we suggest that monotonic visual masking is explained by a combination of neural integration and lateral inhibition. PMID:27535373
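The orientation-selectivity changes described in this abstract are commonly quantified with a vector-sum orientation selectivity index (OSI) computed from firing rates across the stimulus orientations. The sketch below uses the 12-orientation design mentioned above, but the firing rates and the specific index are hypothetical assumptions for illustration, not values or analyses from the study:

```python
import numpy as np

# Hypothetical firing rates (spikes/s) for 12 target orientations (0-165 deg
# in 15-deg steps, as in the stimulus set above); values are illustrative.
orientations_deg = np.arange(0, 180, 15)
rates = np.array([5, 8, 14, 25, 40, 28, 15, 9, 6, 5, 4, 5], dtype=float)

def orientation_selectivity_index(theta_deg, r):
    """Vector-sum OSI: 1 = perfectly orientation-selective, 0 = unselective.
    Orientation is circular with period 180 deg, hence the doubling of angles."""
    theta = 2.0 * np.deg2rad(theta_deg)
    vector = np.sum(r * np.exp(1j * theta))  # resultant in doubled-angle space
    return np.abs(vector) / np.sum(r)

osi = orientation_selectivity_index(orientations_deg, rates)
print(f"OSI = {osi:.3f}")
```

With an index like this, masking at short SOAs would appear as a drop in OSI for the target response, which can be assessed separately from any change in overall firing rate, as the abstract emphasizes.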
Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans
Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene
2014-01-01
The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767
NASA Astrophysics Data System (ADS)
Lushnikov, D. S.; Zherdev, A. Y.; Odinokov, S. B.; Markin, V. V.; Smirnov, A. V.
2017-05-01
This article describes the visual security elements used in color holographic stereograms (three-dimensional colored security holograms) and the methods for producing them. These visual security elements include color microtext, a color hidden image, and horizontal and vertical flip-flop effects realized by changes of color and image. The article also presents variants of optical systems that allow the visual security elements to be recorded as part of the holographic stereograms, along with methods for solving the optical problems that arise when recording them. Perceptual features of the visual security elements relevant to verifying security holograms by means of these elements are also noted. The work was partially funded under the Agreement with the RF Ministry of Education and Science № 14.577.21.0197, grant RFMEFI57715X0197.
ERIC Educational Resources Information Center
Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.
2012-01-01
An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…
ERIC Educational Resources Information Center
Murr, Christopher D.; Blanchard, R. Denise
2011-01-01
Advances in classroom technology have lowered barriers for the visually impaired to study geography, yet few participate. Employing stereotype threat theory, we examined whether beliefs held by the visually impaired affect perceptions toward completing courses and majors in visually oriented disciplines. A test group received a low-level threat…
ERIC Educational Resources Information Center
Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora
2011-01-01
This study surveyed teachers of students with visual impairments in Texas on their perceptions of a set of assistive technology competencies developed for teachers of students with visual impairments by Smith and colleagues (2009). Differences in opinion between practicing teachers of students with visual impairments and Smith's group of…
Development of a computerized visual search test.
Reid, Denise; Babani, Harsha; Jon, Eugenia
2009-09-01
Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies will be reported that examined some aspects of the test's validity. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task performance changes in relation to task complexity. It is suggested that the conceptual design for the test is worthy of further investigation.
Three-quarter view preference for three-dimensional objects in 8-month-old infants.
Yamashita, Wakayo; Niimi, Ryosuke; Kanazawa, So; Yamaguchi, Masami K; Yokosawa, Kazuhiko
2014-04-04
This study examined infants' visual perception of three-dimensional common objects. It has been reported that human adults perceive object images in a view-dependent manner: three-quarter views are often preferred to other views, and the sensitivity to object orientation is lower for three-quarter views than for other views. We tested whether such characteristics were observed in 6- to 8-month-old infants by measuring their preferential looking behavior. In Experiment 1 we examined 190- to 240-day-olds' sensitivity to orientation change and in Experiment 2 we examined these infants' preferential looking for the three-quarter view. The 240-day-old infants showed a pattern of results similar to adults for some objects, while the 190-day-old infants did not. The 240-day-old infants' perception of object view is (partly) similar to that of adults. These results suggest that human visual perception of three-dimensional objects develops at 6 to 8 months of age.
ERIC Educational Resources Information Center
Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.
2013-01-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…
Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets
Morvan, Camille; Maloney, Laurence T.
2012-01-01
Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428
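The abrupt strategy switch described above (the optimal fixation changing once token separation passes a critical value) can be sketched with a toy model in which the probability of resolving a token falls off with retinal eccentricity. The locations, prior, and Gaussian visibility fall-off below are illustrative assumptions, not the paper's model or parameters:

```python
import numpy as np

def identify_prob(fixation, tokens, prior, sigma=2.0):
    """Post-saccadic probability of identifying the target: each token is
    resolved with probability exp(-d^2 / (2 sigma^2)) at eccentricity d."""
    d = np.abs(tokens - fixation)
    return float(np.sum(prior * np.exp(-d**2 / (2 * sigma**2))))

prior = np.array([0.5, 0.25, 0.25])  # token 0 is most likely to be the target

for sep in (1.0, 6.0):  # small vs large token separation (deg, hypothetical)
    tokens = np.array([0.0, sep, 2 * sep])
    p_likely = identify_prob(tokens[0], tokens, prior)      # fixate likeliest token
    p_centre = identify_prob(tokens.mean(), tokens, prior)  # fixate centre of group
    best = "centre" if p_centre > p_likely else "most likely token"
    print(f"sep={sep}: likely={p_likely:.2f}, centre={p_centre:.2f} -> fixate {best}")
```

At small separations the centre fixation gathers information from all tokens at once and wins; at large separations fixating the likeliest token wins. This is the kind of separation-dependent optimum that, per the abstract, human observers failed to track.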
The role of vision in auditory distance perception.
Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro
2012-01-01
In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing the variability of responses. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but little research has examined the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information, and that subjects can store in memory a representation of the environment that later improves the perception of distance.
3D Visualizations of Abstract DataSets
2010-08-01
contrasts no shadows, drop shadows and drop lines. 15. SUBJECT TERMS 3D displays, 2.5D displays, abstract network visualizations, depth perception, human...altitude perception in airspace management and airspace route planning—simulated reality visualizations that employ altitude and heading as well as...cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that individuals with FXS (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in the integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception, defined by either first-order (luminance) or second-order (texture) attributes, were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not of second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic functions of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.
Endogenous Sequential Cortical Activity Evoked by Visual Stimuli
Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael
2015-01-01
Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915
Platz, Thomas; Schüttauf, Johannes; Aschenbach, Julia; Mengdehl, Christine; Lotze, Martin
2016-01-01
The study sought to alter visual spatial attention in young healthy subjects by a neuronavigated inhibitory rTMS protocol (cTBS-600) to right brain areas thought to be involved in visual attentional processes, i.e. the temporoparietal junction (TPJ) and the posterior middle frontal gyrus (pMFG), and to test the reversibility of effects by an additional consecutive cTBS to the homologue left brain cortical areas. Healthy subjects showed a leftward bias of the egocentric perspective for both visual-perceptive and visual-exploratory tasks specifically for items presented in the left hemifield. cTBS to the right TPJ, and less systematically to the right pMFG reduced this bias for visuo-spatial and exploratory visuo-motor behaviour. Further, a consecutive cTBS to the left TPJ changed the bias again towards the left for a visual-perceptive task. The evidence supports the notion of an involvement of the right TPJ (and pMFG) in spatial visual attention. The observations further indicate that inhibitory non-invasive brain stimulation (cTBS) to the left TPJ has a potential for reversing a rightward bias of spatial attention when the right TPJ is dysfunctional. Accordingly, the findings could have implications for therapeutic rTMS development for right brain damaged patients with visual neglect.
Exogenous Attention Enables Perceptual Learning
Szpiro, Sarit F. A.; Carrasco, Marisa
2015-01-01
Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. PMID:26502745
ERIC Educational Resources Information Center
Braden, Roberts A., Ed.; And Others
These proceedings contain 37 papers from 51 authors noted for their expertise in the field of visual literacy. The collection is divided into three sections: (1) "Examining Visual Literacy" (including, in addition to a 7-year International Visual Literacy Association bibliography covering the period from 1983-1989, papers on the perception of…
ERIC Educational Resources Information Center
Coelho, Chase J.; Nusbaum, Howard C.; Rosenbaum, David A.; Fenn, Kimberly M.
2012-01-01
Early research on visual imagery led investigators to suggest that mental visual images are just weak versions of visual percepts. Later research helped investigators understand that mental visual images differ in deeper and more subtle ways from visual percepts. Research on motor imagery has yet to reach this mature state, however. Many authors…
Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary
2013-01-16
Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.
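The elasticity manipulation described above can be illustrated with ideal bounce physics: under a coefficient-of-restitution model, the post-bounce trajectory scales predictably with elasticity, which is exactly the kind of regularity an experience-based internal model could exploit. The function and numbers below are a hypothetical sketch, not the study's analysis:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def post_bounce_apex(vy_impact, e):
    """Apex height reached after a bounce, given downward impact speed
    vy_impact (m/s) and coefficient of restitution e (0..1).
    The bounce rescales vertical speed by e; the apex follows from v^2 / 2g."""
    vy_up = e * vy_impact
    return vy_up ** 2 / (2.0 * G)

# A ball dropped from 1.25 m hits the ground at sqrt(2 * g * h) m/s.
v_impact = math.sqrt(2.0 * G * 1.25)
for e in (0.6, 0.9):  # a "dead" vs a "lively" ball (illustrative values)
    print(f"e={e}: post-bounce apex {post_bounce_apex(v_impact, e):.2f} m")
```

Because the apex is simply e² times the drop height, a single observed bounce is in principle enough to recalibrate a predictive model after a change in ball elasticity, consistent with the rapid adjustment the abstract reports.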
Dunham, Lisette; Dekhtyar, Michael; Gruener, Gregory; CichoskiKelly, Eileen; Deitz, Jennifer; Elliott, Donna; Stuber, Margaret L; Skochelak, Susan E
2017-01-01
Phenomenon: The learning environment is the physical, social, and psychological context in which a student learns. A supportive learning environment contributes to student well-being and enhances student empathy, professionalism, and academic success, whereas an unsupportive learning environment may lead to burnout, exhaustion, and cynicism. Student perceptions of the medical school learning environment may change over time and be associated with students' year of training and may differ significantly depending on the student's gender or race/ethnicity. Understanding the changes in perceptions of the learning environment related to student characteristics and year of training could inform interventions that facilitate positive experiences in undergraduate medical education. The Medical School Learning Environment Survey (MSLES) was administered to 4,262 students who matriculated at one of 23 U.S. and Canadian medical schools in 2010 and 2011. Students completed the survey at the end of each year of medical school as part of a battery of surveys in the Learning Environment Study. A mixed-effects longitudinal model, t tests, Cohen's d effect size, and analysis of variance assessed the relationship between MSLES score, year of training, and demographic variables. After controlling for gender, race/ethnicity, and school, students reported worsening perceptions toward the medical school learning environment, with the worst perceptions in the 3rd year of medical school as students begin their clinical experiences, and some recovery in the 4th year after Match Day. The drop in MSLES scores associated with the transition to the clinical learning environment (-0.26 point drop in addition to yearly change, effect size = 0.52, p < .0001) is more than 3 times greater than the drop between the 1st and 2nd year (0.07 points, effect size = 0.14, p < .0001). The largest declines were from items related to work-life balance and informal student relationships. 
There was some, but not complete, recovery in perceptions of the medical school learning environment in the 4th year. Insights: Perceptions of the medical school learning environment worsen as students continue through medical school, with a stronger decline in perception scores as students' transition to the clinical learning environment. Students reported the greatest drop in finding time for outside activities and students helping one another in the 3rd year. Perceptions differed based on gender and race/ethnicity. Future studies should investigate the specific features of medical schools that contribute most significantly to student perceptions of the medical school learning environment, both positive and negative, to pinpoint potential interventions and improvements.
Models of Speed Discrimination
NASA Technical Reports Server (NTRS)
1997-01-01
The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other focus has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared to physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements, including motion, shading and other physical phenomena. With few exceptions, there seems to have been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the visual system. Understanding these processes will have implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. 
Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.
The big picture: effects of surround on immersion and size perception.
Baranowski, Andreas M; Hecht, Heiko
2014-01-01
Despite the fear of the entertainment industry that illegal downloads of films might ruin their business, going to the movies continues to be a popular leisure activity. One reason why people prefer to watch movies in cinemas may be the surround of the movie screen or its physically huge size. To disentangle the factors that might contribute to the size impression, we tested several measures of subjective size and immersion in different viewing environments. For this purpose we built a model cinema that provided visual angle information comparable with that of a real cinema. Subjects watched identical movie clips in a real cinema, a model cinema, and on a display monitor in isolation. Whereas the isolated display monitor was inferior, the addition of a contextual model improved the viewing immersion to the extent that it was comparable with the movie theater experience, provided the viewing angle remained the same. In a further study we built an identical but even smaller model cinema to unconfound visual angle and viewing distance. Both model cinemas produced similar results. There was a trend for the larger screen to be more immersive; however, viewing angle did not play a role in how the movie was evaluated.
Talker variability in audio-visual speech perception
Heald, Shannon L. M.; Nusbaum, Howard C.
2014-01-01
A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts have shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener a change in talker has occurred. PMID:25076919
Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal
2013-01-01
Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion Webcameras have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119
Draper, Catherine E; Davidowitz, Kesiah J; Goedecke, Julia H
2016-02-01
A higher tolerance for a larger body size has been associated with obesity in black South African (SA) women. The aim of the present study was to explore perceptions regarding body size and weight loss in a sample of black women from a low-income community in Cape Town, SA. The design was a qualitative pilot study comprising five focus groups, with data analysed using thematic analysis; the setting was Khayelitsha, Cape Town, SA, and the participants were twenty-one black SA women. The majority of participants had positive perceptions of overweight/obesity, which were influenced by community and cultural perceptions, but some inconsistencies were observed, as overweight/obesity was also associated with ill health. Participants identified many benefits to weight loss, but due to the association with sickness, they were concerned about being stigmatised in their community. Although participants had knowledge about healthy eating, the main barriers to eating healthily included the perceived higher cost of healthier food and food insecurity. All participants saw exercise as a strategy to lose weight and improve health, and were interested in participating in a community-based exercise intervention, but negative community perceptions and conflicting views regarding who should lead the intervention were identified as barriers. These findings highlight the complexities surrounding participants' perceptions regarding body size, weight loss and weight-loss interventions, and emphasise low socio-economic status as a barrier to change. The study also highlights the strong influence of cultural ideals and community perceptions on personal perceptions. These findings underscore the necessity for culturally appropriate weight-loss interventions in low-income, transitioning communities.
Temporal parameters and time course of perceptual latency priming.
Scharlau, Ingrid; Neumann, Odmar
2003-06-01
Visual stimuli (primes) reduce the perceptual latency of a target appearing at the same location (perceptual latency priming, PLP). Three experiments assessed the time course of PLP by masked and, in Experiment 3, unmasked primes. Experiments 1 and 2 investigated the temporal parameters that determine the size of priming: stimulus onset asynchrony was found to exert the main influence, accompanied by a small effect of prime duration. Experiment 3 used a large range of priming onset asynchronies. We suggest explaining PLP by the Asynchronous Updating Model, which relates it to the asynchrony of two central coding processes: preattentive coding of basic visual features, and attentional orienting as a prerequisite for perceptual judgments and conscious perception.
Attention Increases Spike Count Correlations between Visual Cortical Areas.
Ruff, Douglas A; Cohen, Marlene R
2016-07-13
Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales.
Disentangling visual imagery and perception of real-world objects
Lee, Sue-Hyun; Kravitz, Dwight J.; Baker, Chris I.
2011-01-01
During mental imagery, visual representations can be evoked in the absence of “bottom-up” sensory input. Prior studies have reported similar neural substrates for imagery and perception, but studies of brain-damaged patients have revealed a double dissociation with some patients showing preserved imagery in spite of impaired perception and others vice versa. Here, we used fMRI and multi-voxel pattern analysis to investigate the specificity, distribution, and similarity of information for individual seen and imagined objects to try and resolve this apparent contradiction. In an event-related design, participants either viewed or imagined individual named object images on which they had been trained prior to the scan. We found that the identity of both seen and imagined objects could be decoded from the pattern of activity throughout the ventral visual processing stream. Further, there was enough correspondence between imagery and perception to allow discrimination of individual imagined objects based on the response during perception. However, the distribution of object information across visual areas was strikingly different during imagery and perception. While there was an obvious posterior-anterior gradient along the ventral visual stream for seen objects, there was an opposite gradient for imagined objects. Moreover, the structure of representations (i.e. the pattern of similarity between responses to all objects) was more similar during imagery than perception in all regions along the visual stream. These results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies. PMID:22040738
How does parents' visual perception of their child's weight status affect their feeding style?
Yilmaz, Resul; Erkorkmaz, Ünal; Ozcetin, Mustafa; Karaaslan, Erhan
2013-01-01
Eating style is one of the prominent factors that determine energy intake, and one of the factors influencing parental feeding style is parental perception of the child's weight status. The aim of this study was to evaluate the relationship between mothers' visual perception of their children's weight status and their feeding style. A cross-sectional survey was completed by the mothers of 380 preschool children aged 5 to 7 years (mean 6.14 years). Visual perception scores were measured with a sketch, and maternal feeding style was measured with the validated "Parental Feeding Style Questionnaire". The parental feeding dimensions "emotional feeding" and "encouragement to eat" subscale scores were low in children classified as overweight by visual perception. "Emotional feeding" and "permissive control" subscale scores differed significantly between children whose weight was correctly perceived and those whose weight was misperceived as lower than it was. Various feeding styles were related to maternal visual perception. The best approach to preventing obesity and underweight may be to focus on achieving correct parental perception of children's weight status, thereby improving parental skills and leading parents to implement proper feeding styles.
Neuronal integration in visual cortex elevates face category tuning to conscious face perception
Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.
2012-01-01
The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162
Navarro, A; Cristaldo, P E; Díaz, M P; Eynard, A R
2000-01-01
Food pictures are suitable visual tools for quantifying food and nutrient consumption while avoiding bias from self-assessment. The aims were to determine the perception of food portion size and to establish the efficacy of food pictures for dietary assessment. A food frequency questionnaire (FFQ) including 118 food items of daily consumption was administered to 30 adults representative of the population of Córdoba, Argentina. Among several food models (papier-mâché, plastic) and pictures, those that most accurately served the purpose were selected, and three standard portion sizes (small, medium, and large) were determined. Data were evaluated with descriptive statistics and a chi-square goodness-of-fit test. Fifty-one percent of the foods were assessed in concordance with the reference size; in general, the remainder were overestimated. Ninety percent of volunteers concluded that the pictures were the best visual resource. The photographic atlas of food is a useful tool for quantifying dietary consumption, suitable for many types of dietary assessment. In conclusion, comparison among pictures of three previously standardized portions of each food is highly recommended.
When size matters: attention affects performance by contrast or response gain.
Herrmann, Katrin; Montaser-Kouhsari, Leila; Carrasco, Marisa; Heeger, David J
2010-12-01
Covert attention, the selective processing of visual information in the absence of eye movements, improves behavioral performance. We found that attention, both exogenous (involuntary) and endogenous (voluntary), can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. These two variables were manipulated in a cueing task while stimulus contrast was varied. We observed a change in behavioral performance consonant with a change in contrast gain for small stimuli paired with spatial uncertainty and a change in response gain for large stimuli presented at one location (no uncertainty) and surrounded by irrelevant flanking distracters. A complementary neuroimaging experiment revealed that observers' attention fields were wider with than without spatial uncertainty. Our results support important predictions of the normalization model of attention and reconcile previous, seemingly contradictory findings on the effects of visual attention.
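The contrast-gain versus response-gain distinction drawn in this abstract can be illustrated with the standard Naka-Rushton contrast response function; the parameter values below are arbitrary illustrations, not estimates from the study:

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Contrast response function R(c) = Rmax * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.logspace(-2, 0, 50)   # 1% to 100% contrast

baseline = naka_rushton(contrasts)
# Contrast gain: attention effectively lowers the semi-saturation
# contrast c50, shifting the curve along the contrast axis.
contrast_gain = naka_rushton(contrasts, c50=0.1)
# Response gain: attention scales the asymptotic response Rmax,
# stretching the curve along the response axis.
response_gain = naka_rushton(contrasts, r_max=1.4)
```

A contrast-gain change moves the curve's steep region toward lower contrasts, so performance benefits most at intermediate contrasts; a response-gain change multiplies performance most visibly at high contrasts.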
Moon illusion and spiral aftereffect: illusions due to the loom-zoom system?
Hershenson, M
1982-12-01
The moon illusion and the spiral aftereffect are illusions in which apparent size and apparent distance vary inversely. Because this relationship is exactly opposite to that predicted by the static size--distance invariance hypothesis, the illusions have been called "paradoxical." The illusions may be understood as products of a loom-zoom system, a hypothetical visual subsystem that, in its normal operation, acts according to its structural constraint, the constancy axiom, to produce perceptions that satisfy the constraints of stimulation, the kinetic size--distance invariance hypothesis. When stimulated by its characteristic stimulus of symmetrical expansion or contraction, the loom-zoom system produces the perception of a rigid object moving in depth. If this system is stimulated by a rotating spiral, a negative motion-aftereffect is produced when rotation ceases. If fixation is then shifted to a fixed-sized disc, the aftereffect process alters perceived distance and the loom-zoom system alters perceived size such that the disc appears to expand and approach or to contract and recede, depending on the direction of rotation of the spiral. If the loom-zoom system is stimulated by a moon-terrain configuration, the equidistance tendency produces a foreshortened perceived distance for the moon as an inverse function of elevation and acts in conjunction with the loom-zoom system to produce the increased perceived size of the moon.
Size Fluctuations of Near Critical Nuclei and Gibbs Free Energy for Nucleation of BDA on Cu(001)
NASA Astrophysics Data System (ADS)
Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.; Poelsema, Bene
2012-07-01
We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
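The competition between bulk free-energy gain and edge (line-tension) cost that sets the critical nucleus size can be sketched with classical 2D nucleation theory; the parameter values below are hypothetical illustrations, not the measured supersaturation or line tension reported above:

```python
import numpy as np

# Hypothetical parameters for illustration only.
delta_mu = 0.02   # free-energy gain per molecule in the crystallite (eV)
gamma = 0.10      # line tension (eV per unit edge length)
b = 4.0           # geometric factor: island edge length ~ b * sqrt(n)

def delta_G(n):
    """2D nucleation free energy: bulk gain -n*delta_mu plus edge cost
    gamma*b*sqrt(n); the maximum of this curve is the nucleation barrier."""
    return -n * delta_mu + gamma * b * np.sqrt(n)

# Critical nucleus size from d(delta_G)/dn = 0:
#   n* = (gamma * b / (2 * delta_mu))**2
n_star = (gamma * b / (2 * delta_mu)) ** 2
barrier = delta_G(n_star)
```

Because delta_G falls off only gradually past its maximum, islands somewhat larger than n* still face a shallow downhill slope, which is consistent with the observation that nuclei several times the critical size can still decay with finite probability.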
Size fluctuations of near critical nuclei and Gibbs free energy for nucleation of BDA on Cu(001).
Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J W; Poelsema, Bene
2012-07-06
We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
The Comparison of Visual Working Memory Representations with Perceptual Inputs
ERIC Educational Resources Information Center
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.
2009-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…
The role of convexity in perception of symmetry and in visual short-term memory.
Bertamini, Marco; Helmy, Mai Salah; Hulleman, Johan
2013-01-01
Visual perception of shape is affected by coding of local convexities and concavities. For instance, a recent study reported that deviations from symmetry carried by convexities were easier to detect than deviations carried by concavities. We removed some confounds and extended this work from a detection of reflection of a contour (i.e., bilateral symmetry), to a detection of repetition of a contour (i.e., translational symmetry). We tested whether any convexity advantage is specific to bilateral symmetry in a two-interval (Experiment 1) and a single-interval (Experiment 2) detection task. In both, we found a convexity advantage only for repetition. When we removed the need to choose which region of the contour to monitor (Experiment 3) the effect disappeared. In a second series of studies, we again used shapes with multiple convex or concave features. Participants performed a change detection task in which only one of the features could change. We did not find any evidence that convexities are special in visual short-term memory, when the to-be-remembered features only changed shape (Experiment 4), when they changed shape and changed from concave to convex and vice versa (Experiment 5), or when these conditions were mixed (Experiment 6). We did find a small advantage for coding convexity as well as concavity over an isolated (and thus ambiguous) contour. The latter is consistent with the known effect of closure on processing of shape. We conclude that convexity plays a role in many perceptual tasks but that it does not have a basic encoding advantage over concavity.
Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification
NASA Astrophysics Data System (ADS)
Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato
We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.
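The direct (simultaneity-judgment) measurement described above can be caricatured as a Gaussian response curve whose peak, the point of subjective simultaneity (PSS), shifts after lag adaptation; the curve width and shift fraction below are illustrative assumptions, not the study's estimates:

```python
import math

def p_simultaneous(soa_ms, pss_ms=0.0, sigma_ms=80.0):
    """Gaussian simultaneity-judgment curve: the probability of a
    'simultaneous' response peaks at the point of subjective
    simultaneity (PSS) and falls off with audiovisual lag."""
    return math.exp(-0.5 * ((soa_ms - pss_ms) / sigma_ms) ** 2)

# Recalibration: after exposure to a constant lag (say, audio lagging
# vision by 100 ms), the PSS shifts part of the way toward that lag.
adapt_lag_ms = 100.0
shift_fraction = 0.3          # illustrative magnitude, not from the study
pss_after = shift_fraction * adapt_lag_ms

before = p_simultaneous(adapt_lag_ms)                    # neutral PSS
after = p_simultaneous(adapt_lag_ms, pss_ms=pss_after)   # recalibrated
```

Under this sketch, the adapted lag is judged "simultaneous" more often after exposure than before, which is the signature of temporal recalibration the experiments test for.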
Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.
Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A
2007-06-01
This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices using a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and laparoscopic skills were assessed objectively during 1-h practice sessions on the LapSim(R) comprising coordination, cutting, and clip-application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with performance scores on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those with a low degree of spatial perception (Group B) (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators developing training programs that can be individually adapted.
Behrens, Janina R.; Kraft, Antje; Irlbacher, Kerstin; Gerhardt, Holger; Olma, Manuel C.; Brandt, Stephan A.
2017-01-01
Understanding processes performed by an intact visual cortex as the basis for developing methods that enhance or restore visual perception is of great interest to both researchers and medical practitioners. Here, we explore whether contrast sensitivity, a main function of the primary visual cortex (V1), can be improved in healthy subjects by repetitive, noninvasive anodal transcranial direct current stimulation (tDCS). Contrast perception was measured via threshold perimetry directly before and after intervention (tDCS or sham stimulation) on each day over 5 consecutive days (24 subjects, double-blind study). tDCS improved contrast sensitivity from the second day onwards, with significant effects lasting 24 h. After the last stimulation on day 5, the anodal group showed a significantly greater improvement in contrast perception than the sham group (23 vs. 5%). We found significant long-term effects in only the central 2–4° of the visual field 4 weeks after the last stimulation. We suspect a combination of two factors contributes to these lasting effects. First, the V1 area that represents the central retina was located closer to the polarization electrode, resulting in higher current density. Second, the central visual field is represented by a larger cortical area relative to the peripheral visual field (cortical magnification). This is the first study showing that tDCS over V1 enhances contrast perception in healthy subjects for several weeks. This study contributes to the investigation of the causal relationship between the external modulation of neuronal membrane potential and behavior (in our case, visual perception). Because the vast majority of human studies only show temporary effects after single tDCS sessions targeting the visual system, our study underpins the potential for lasting effects of repetitive tDCS-induced modulation of neuronal excitability. PMID:28860969
Cultural differences in room size perception
Bülthoff, Heinrich H.; de la Rosa, Stephan; Dodds, Trevor J.
2017-01-01
Cultural differences in spatial perception have been little investigated, which gives rise to the impression that spatial cognitive processes might be universal. Contrary to this idea, we demonstrate cultural differences in spatial volume perception of computer generated rooms between Germans and South Koreans. We used a psychophysical task in which participants had to judge whether a rectangular room was larger or smaller than a square room of reference. We systematically varied the room rectangularity (depth to width aspect ratio) and the viewpoint (middle of the short wall vs. long wall) from which the room was viewed. South Koreans were significantly less biased by room rectangularity and viewpoint than their German counterparts. These results are in line with previous notions of general cognitive processing strategies being more context dependent in East Asian societies than Western ones. We point to the necessity of considering culturally-specific cognitive processing strategies in visual spatial cognition research. PMID:28426729
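The larger/smaller judgment task described above can be modeled with a cumulative-Gaussian psychometric function in which a bias term captures the rectangularity/viewpoint effect; the bias and slope values below are illustrative assumptions, not the paper's estimates:

```python
import math

def p_judged_larger(log_area_ratio, bias=0.0, sigma=0.1):
    """Cumulative-Gaussian psychometric function for judging a test room
    'larger' than the square reference; log_area_ratio is log(test/ref).
    A nonzero bias models the rectangularity/viewpoint effect: equal-area
    rooms are judged larger or smaller depending on aspect ratio."""
    z = (log_area_ratio - bias) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative group biases (not the paper's estimates): a smaller |bias|
# corresponds to the less rectangularity-biased group.
german_bias, korean_bias = 0.05, 0.02
# The point of subjective equality (p = 0.5) sits at log_area_ratio = bias.
p_german = p_judged_larger(0.0, bias=german_bias)
p_korean = p_judged_larger(0.0, bias=korean_bias)
```

In this framing, a culturally less biased observer group corresponds to a point of subjective equality closer to the true equal-area point.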