Changes in the distribution of sustained attention alter the perceived structure of visual space.
Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael
2017-02-01
Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants must judge the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Zhou, Liu; He, Zijiang J.; Ooi, Teng Leng
2013-01-01
Dimly lit targets in the dark are perceived as located about an implicit slanted surface that delineates the visual system's intrinsic bias (Ooi, Wu, & He, 2001). If the intrinsic bias reflects the internal model of visual space--as proposed here--its influence should extend beyond target localization. Our first 2 experiments demonstrated that…
Inhibition of return shortens perceived duration of a brief visual event.
Osugi, Takayuki; Takeda, Yuji; Murakami, Ikuya
2016-11-01
We investigated the influence of attentional inhibition on the perceived duration of a brief visual event. Although attentional capture by an exogenous cue is known to prolong the perceived duration of an attended visual event, it remains unclear whether time perception is also affected by subsequent attentional inhibition at the previously cued location, an attentional phenomenon known as inhibition of return. In this study, we combined spatial cuing and duration judgment. One second after the appearance of an uninformative peripheral cue on either the left or the right, a target appeared at the cued side in one-third of the trials, which indeed yielded inhibition of return, and at the opposite side in another one-third of the trials. In the remaining trials, a cue appeared at a central box and, one second later, a target appeared at either the left or the right side. The target at the previously cued location was perceived as lasting a shorter time than the target presented at the opposite location, and a shorter time than the target presented after the central cue. Therefore, attentional inhibition produced by a classical inhibition-of-return paradigm decreased the perceived duration of a brief visual event. Copyright © 2016 Elsevier Ltd. All rights reserved.
If it's not there, where is it? Locating illusory conjunctions.
Hazeltine, R E; Prinzmetal, W; Elliott, W
1997-02-01
There is evidence that complex objects are decomposed by the visual system into features, such as shape and color. Consistent with this theory is the phenomenon of illusory conjunctions, which occur when features are incorrectly combined to form an illusory object. We analyzed the perceived location of illusory conjunctions to study the roles of color and shape in the location of visual objects. In Experiments 1 and 2, participants located illusory conjunctions about halfway between the veridical locations of the component features. Experiment 3 showed that the distribution of perceived locations was not the mixture of two distributions centered at the two feature locations. Experiment 4 replicated these results with an identification task rather than a detection task. We concluded that the locations of illusory conjunctions were not arbitrary but were determined by both constituent shape and color.
Spatial effects of shifting prisms on properties of posterior parietal cortex neurons
Karkhanis, Anushree N; Heider, Barbara; Silva, Fabian Muñoz; Siegel, Ralph M
2014-01-01
The posterior parietal cortex contains neurons that respond to visual stimulation and motor behaviour. The objective of the current study was to test short-term adaptation in neurons in macaque area 7a and the dorsal prelunate during visually guided reaching using Fresnel prisms that displaced the visual field. The visual perturbation shifted the eye position and created a mismatch between perceived and actual reach location. Two non-human primates were trained to reach to visual targets before, during and after prism exposure while fixating the reach target in different locations. They were required to reach to the physical location of the reach target and not the perceived, displaced location. While behavioural adaptation to the prisms occurred within a few trials, the majority of neurons responded to the distortion either with substantial changes in spatial eye position tuning or changes in overall firing rate. These changes persisted even after prism removal. The spatial changes were not correlated with the direction of induced prism shift. The transformation of gain fields between conditions was estimated by calculating the translation and rotation in Euler angles. Rotations and translations of the horizontal and vertical spatial components occurred in a systematic manner for the population of neurons, suggesting that the posterior parietal cortex retains a constant representation of the visual field while remapping between experimental conditions. PMID:24928956
Dukic, T; Hanson, L; Falkmer, T
2006-01-15
The study examined the effects of manual control locations on visual time off road, steering wheel deviation and safety perception in two groups of randomly selected young and old drivers. Measures of visual time off road, steering wheel deviations and safety perception were collected from both groups during driving in real traffic. The results showed an effect of both driver age and button location on the dependent variables. Older drivers spent longer visual time off road when pushing the buttons and had larger steering wheel deviations. Moreover, the greater the eccentricity between the normal line of sight and the button locations, the longer the visual time off road and the larger the steering wheel deviations. No interaction effect between button location and age was found with regard to visual time off road. Button location had an effect on perceived safety: the further away from the normal line of sight, the lower the rating.
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus.
Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150
Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words
ERIC Educational Resources Information Center
Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard
2016-01-01
Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
2018-06-01
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL) since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first one compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker that were found highly accurate. Then, we asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information, obtained from the previous task, did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the methods interleaved, showing a weak but complex mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.
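The response compression this abstract refers to is conventionally summarized by fitting a power function, perceived distance = k·dᵃ, to the distance reports, where an exponent a < 1 indicates compressive underestimation of far distances. As a rough illustration only (the values and the fitting procedure below are assumptions, not taken from the study), such a fit can be done by linear regression in log-log space:

```python
import numpy as np

# Hypothetical distance reports: physical source distances of 1-6 m and
# perceived distances generated from a compressive power law d' = k * d**a.
physical = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
perceived = 1.1 * physical ** 0.8  # illustrative values only

# log d' = log k + a * log d, so a linear fit in log-log space
# recovers the exponent (slope) and scale (exp of intercept).
a, log_k = np.polyfit(np.log(physical), np.log(perceived), 1)
k = np.exp(log_k)
print(f"exponent a = {a:.2f}, scale k = {k:.2f}")  # a < 1 => compression
```

An exponent recovered this way makes the "compression and bias" comparison between response methods concrete: two methods can be compared by their fitted a (compression) and k (overall bias).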
Object formation in visual working memory: Evidence from object-based attention.
Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei
2016-09-01
We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, the target presented at the location of that object was perceived as occurring earlier than that presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.
Location cue validity affects inhibition of return of visual processing.
Wright, R D; Richard, C M
2000-01-01
Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.
Pre-Service Visual Art Teachers' Perceptions of Assessment in Online Learning
ERIC Educational Resources Information Center
Allen, Jeanne Maree; Wright, Suzie; Innes, Maureen
2014-01-01
This paper reports on a study conducted into how one cohort of Master of Teaching pre-service visual art teachers perceived their learning in a fully online learning environment. Located in an Australian urban university, this qualitative study provided insights into a number of areas associated with higher education online learning, including…
Perceived Visual Distortions in Juvenile Amblyopes During/Following Routine Amblyopia Treatment.
Piano, Marianne E F; Bex, Peter J; Simmers, Anita J
2016-08-01
To establish the point prevalence of perceived visual distortions (PVDs) in amblyopic children; the association between severity of PVDs and clinical parameters of amblyopia; and the relationship between PVDs and amblyopia treatment outcomes. Perceived visual distortions were measured using a 16-point dichoptic alignment paradigm in 148 visually normal children (aged, 9.18 ± 2.51 years), and 82 amblyopic children (aged, 6.33 ± 1.48 years) receiving or following amblyopia treatment. Global distortion (GD; vector sum of mean-centered individual alignment error between physical and perceived target location) and Global uncertainty (GU; SD of GD over two experiment runs) were compared to age-matched control data, and correlated against clinical parameters of amblyopia (type, monocular visual acuity, pretreatment interocular acuity difference, refractive error, age at diagnosis, motor fusion, stereopsis, near angle of deviation) and amblyopia treatment outcomes (refractive adaption duration, treatment duration, occlusion dosage, posttreatment interocular acuity difference, number of lines improvement). Point prevalence of PVDs in amblyopes was 56.1%. Strabismic amblyopes experienced more severe distortions than anisometropic or microtropic amblyopes (GD Kruskal-Wallis H = 16.89, P < 0.001; GU Kruskal-Wallis H = 15.31, P < 0.001). Perceived visual distortion severity correlated moderately with the strength of binocular function (e.g., log stereoacuity [GD rho = 0.419, P < 0.001; GU rho = 0.384, P < 0.001]), and strongly with near angle of deviation (GD rho = 0.578, P < 0.001; GU rho = 0.384, P < 0.001). There was no relationship between severity of PVDs and amblyopia treatment outcomes, or the amblyopic visual acuity deficit. Perceived visual distortions persisted in more than one-half of treated amblyopic cases whose treatment was deemed successful.
Perceived visual distortions are common symptoms of amblyopia and are correlated with binocular (stereoacuity, angle of deviation), but not monocular (visual acuity) clinical outcomes. This adds to evidence demonstrating the role of decorrelated binocular single vision in many aspects of amblyopia, and emphasizes the importance of restoring and improving binocular single vision in amblyopic individuals.
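The GD and GU measures above are defined only in prose. As a rough sketch of one plausible reading (the study's exact computation may differ; note that a literal vector sum of mean-centered error vectors is identically zero, so this sketch sums the magnitudes of the mean-centered errors instead):

```python
import numpy as np

def global_distortion(errors):
    """One plausible reading of GD: total magnitude of the mean-centered
    alignment errors, with mean-centering removing any uniform shift.
    `errors` is an (n_points, 2) array of (dx, dy) alignment errors
    between physical and perceived target locations (16 points here)."""
    centered = errors - errors.mean(axis=0)
    return float(np.linalg.norm(centered, axis=1).sum())

def global_uncertainty(run1, run2):
    """GU: SD of GD over the two experiment runs."""
    return float(np.std([global_distortion(run1), global_distortion(run2)]))
```

Under this reading, a pure uniform misalignment (every point shifted by the same vector) yields GD = 0, so GD captures spatially non-uniform distortion rather than a global offset.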
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means of overcoming the inhibitory effects of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Differential patterns of 2D location versus depth decoding along the visual hierarchy.
Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D
2017-02-15
Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. Copyright © 2016 Elsevier Inc. All rights reserved.
Importing perceived features into false memories.
Lyle, Keith B; Johnson, Marcia K
2006-02-01
False memories sometimes contain specific details, such as location or colour, about events that never occurred. Based on the source-monitoring framework, we investigated one process by which false memories acquire details: the reactivation and misattribution of feature information from memories of similar perceived events. In Experiments 1A and 1B, when imagined objects were falsely remembered as seen, participants often reported that the objects had appeared in locations where visually or conceptually similar objects, respectively, had actually appeared. Experiment 2 indicated that colour and shape features of seen objects were misattributed to false memories of imagined objects. Experiment 3 showed that perceived details were misattributed to false memories of objects that had not been explicitly imagined. False memories that imported perceived features, compared to those that presumably did not, were subjectively more like memories for perceived events. Thus, perception may be even more pernicious than imagination in contributing to false memories.
Effects of Ocular Optics on Perceived Visual Direction and Depth
NASA Astrophysics Data System (ADS)
Ye, Ming
Most studies of human retinal image quality have specifically addressed the issue of image contrast; few have examined the problem of image location. However, one of the most impressive properties of human vision involves the location of objects. We are able to identify object location with great accuracy (less than 5 arcsec). The sensitivity we exhibit for image location indicates that any optical errors, such as refractive error, ocular aberrations, pupil decentration, etc., may have noticeable effects on the perceived visual direction and distance of objects. The most easily observed effect of these optical factors is a binocular depth illusion called chromostereopsis, in which equidistant colored objects appear to lie at different distances. This dissertation covers a series of theoretical and experimental studies that examined the effects of ocular optics on perceived monocular visual direction and binocular chromostereopsis. Theoretical studies included development of an adequate eye model for predicting chromatic aberration, a major ocular aberration, using geometric optics. Also, a wave optical analysis is used to model the effects of defocus, optical aberrations, the Stiles-Crawford effect (SCE) and pupil location on retinal image profiles. Experimental studies used psychophysical methods such as monocular vernier alignment tests, binocular stereoscopic tests, etc. This dissertation concludes: (1) With a decentered large pupil, the SCE reduces defocused image shifts compared to an eye without the SCE. (2) The blurred image location can be predicted by the centroid of the image profile. (3) Chromostereopsis with small pupils can be precisely accounted for by the interocular difference in monocular transverse chromatic aberration. (4) The SCE also plays an important role in the effect of pupil size on chromostereopsis.
The reduction of chromostereopsis with large pupils can be accurately predicted by the interocular difference in monocular chromatic diplopia which is also reduced with large pupils. This supports the hypothesis that the effect of pupil size on chromostereopsis is due to monocular mechanisms.
The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades.
Boon, Paul J; Belopolsky, Artem V; Theeuwes, Jan
2016-01-01
Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in the updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating fully relies on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory: a match-to-sample task, a mouse click, or another saccade. The results showed a small systematic target displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and the saccade target revealed that target displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. Taken together, the results suggest that we rely exclusively neither on extraretinal nor on retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends on both the time available to integrate these signals and the distance between the saccade target and the remembered location.
Reading Habits, Perceptual Learning, and Recognition of Printed Words
ERIC Educational Resources Information Center
Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram
2004-01-01
The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…
Misperception of exocentric directions in auditory space
Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen
2008-01-01
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205
Aymerich-Franch, Laura; Petit, Damien; Ganesh, Gowrishankar; Kheddar, Abderrahmane
2016-11-01
Whole-body embodiment studies have shown that synchronized multi-sensory cues can trick a healthy human mind to perceive self-location outside the bodily borders, producing an illusion that resembles an out-of-body experience (OBE). But can a healthy mind also perceive the sense of self in more than one body at the same time? To answer this question, we created a novel artificial reduplication of one's body using a humanoid robot embodiment system. We first enabled individuals to embody the humanoid robot by providing them with audio-visual feedback and control of the robot's head movements and walking, and then explored the self-location and self-identification perceived by them when they observed themselves through the embodied robot. Our results reveal that, when individuals are exposed to the humanoid body reduplication, they experience an illusion that strongly resembles heautoscopy, suggesting that a healthy human mind is able to bi-locate in two different bodies simultaneously. Copyright © 2016 Elsevier Inc. All rights reserved.
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception have traditionally considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Corollary discharge contributes to perceived eye location in monkeys.
Joiner, Wilsaan M; Cavanaugh, James; FitzGibbon, Edmond J; Wurtz, Robert H
2013-11-01
Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do.
Optical phonetics and visual perception of lexical and phrasal stress in English.
Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer
2009-01-01
In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.
Manipulation of the extrastriate frontal loop can resolve visual disability in blindsight patients.
Badgaiyan, Rajendra D
2012-12-01
Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness. It can be accomplished by manipulating the neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between the area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, the V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients however continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision. Published by Elsevier Ltd.
An exploratory study of temporal integration in the peripheral retina of myopes
NASA Astrophysics Data System (ADS)
Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.
2017-08-01
The visual system takes time to respond to visual stimuli; neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effects of eccentricity, moderate myopia, and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.
Are neural correlates of visual consciousness retinotopic?
ffytche, Dominic H; Pins, Delphine
2003-11-14
Some visual neurons code what we see, their defining characteristic being a response profile which mirrors conscious percepts rather than veridical sensory attributes. One issue yet to be resolved is whether, within a given cortical area, conscious visual perception relates to diffuse activity across the entire population of such cells or focal activity within the sub-population mapping the location of the perceived stimulus. Here we investigate the issue in the human brain with fMRI, using a threshold stimulation technique to dissociate perceptual from non-perceptual activity. Our results point to a retinotopic organisation of perceptual activity in early visual areas, with independent perceptual activations for different regions of visual space.
The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades
Boon, Paul J.; Belopolsky, Artem V.; Theeuwes, Jan
2016-01-01
Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating fully relies on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory: a match-to-sample task, a mouse click, or another saccade. The results showed a small systematic target displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and saccade target revealed that target displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. Taken together, the results suggest that we do not rely exclusively on either extraretinal or retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends on both the time available to integrate these signals as well as the distance between the saccade target and the remembered location.
PMID:27631767
Fenko, Anna; de Vries, Roxan; van Rompay, Thomas
2018-01-01
This study investigates the relative impact of textual claims and visual metaphors displayed on the product’s package on consumers’ flavor experience and product evaluation. For consumers, strength is one of the most important sensory attributes of coffee. The 2 × 3 between-subjects experiment (N = 123) compared the effects of visual metaphor of strength (an image of a lion located either on top or on the bottom of the package of coffee beans) and the direct textual claim (“extra strong”) on consumers’ responses to coffee, including product expectation, flavor evaluation, strength perception and purchase intention. The results demonstrate that both the textual claim and the visual metaphor can be efficient in communicating the product attribute of strength. The presence of the image positively influenced consumers’ product expectations before tasting. The textual claim increased the perception of strength of coffee and the purchase intention of the product. The location of the image also played an important role in flavor perception and purchase intention. The image located on the bottom of the package increased the perceived strength of coffee and purchase intention of the product compared to the image on top of the package. This result could be interpreted from the perspective of the grounded cognition theory, which suggests that a picture in the lower part of the package would automatically activate the “strong is heavy” metaphor. As heavy objects are usually associated with a position on the ground, this would explain why perceiving a visually heavy package would lead to the experience of a strong coffee. Further research is needed to better understand the relationships between a metaphorical image and its spatial position in food packaging design. PMID:29459840
Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.
Paul, Jacob M; Reeve, Robert A; Forte, Jason D
2017-03-01
We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and showed that participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and that specific enumeration demands affect the spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.
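The grouping-by-proximity component described above (dots within roughly 2°-2.5° of visual angle judged as one subset) can be sketched, without the authors' saliency-filtering step, as a simple distance-threshold clustering. The 2.25° radius and the coordinate format are illustrative assumptions, not the study's parameters:

```python
import math

def count_subsets(dots, radius=2.25):
    """Count perceived subsets by grouping dots whose centre-to-centre
    distance falls below a proximity threshold (degrees of visual angle).
    Union-find over all dot pairs; each connected group is one subset."""
    parent = list(range(len(dots)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(dots):
        for j in range(i + 1, len(dots)):
            xj, yj = dots[j]
            if math.hypot(xi - xj, yi - yj) <= radius:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri  # merge the two groups

    return len({find(i) for i in range(len(dots))})
```

A chain of dots each within the radius of its neighbour collapses into a single subset, which matches the intuition that proximity grouping is transitive.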
Audible vision for the blind and visually impaired in indoor open spaces.
Yu, Xunyi; Ganz, Aura
2012-01-01
In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life, ever-changing environments crowded with people.
Attention changes perceived size of moving visual patterns.
Anton-Erxleben, Katharina; Henrich, Christian; Treue, Stefan
2007-08-23
Spatial attention shifts receptive fields in monkey extrastriate visual cortex toward the focus of attention (S. Ben Hamed, J. R. Duhamel, F. Bremmer, & W. Graf, 2002; C. E. Connor, J. L. Gallant, D. C. Preddie, & D. C. Van Essen, 1996; C. E. Connor, D. C. Preddie, J. L. Gallant, & D. C. Van Essen, 1997; T. Womelsdorf, K. Anton-Erxleben, F. Pieper, & S. Treue, 2006). This distortion in the retinotopic distribution of receptive fields might cause distortions in spatial perception such as an increase of the perceived size of attended stimuli. Here we test for such an effect in human subjects by measuring the point of subjective equality (PSE) for the perceived size of a neutral and an attended stimulus when drawing automatic attention to one of two spatial locations. We found a significant increase in perceived size of attended stimuli. Depending on the absolute stimulus size, this effect ranged from 4% to 12% and was more pronounced for smaller than for larger stimuli. In our experimental design, an attentional effect on task difficulty or a cue bias might influence the PSE measure. We performed control experiments and indeed found such effects, but they could only account for part of the observed results. Our findings demonstrate that the allocation of transient spatial attention onto a visual stimulus increases its perceived size and additionally biases subjects to select this stimulus for a perceptual judgment.
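A point of subjective equality (PSE) of the kind measured above is typically read off a fitted psychometric function. The sketch below is a hedged illustration, not the authors' procedure: it fits a cumulative Gaussian by grid-search maximum likelihood and returns the test size at which "test appears bigger" is reported on half the trials. The grid ranges and the data format (size -> (times judged bigger, total trials)) are assumptions:

```python
import math

def fit_pse(sizes, responses):
    """Grid-search maximum-likelihood fit of a cumulative-Gaussian
    psychometric function.  `responses` maps test size -> (k, n):
    k 'test bigger' judgments out of n trials.  Returns (PSE, slope)."""
    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    lo, hi = min(sizes), max(sizes)
    mus = [lo + k * (hi - lo) / 200 for k in range(201)]
    sigmas = [s / 10 for s in range(1, 31)]          # 0.1 .. 3.0
    best, best_ll = None, -math.inf
    for mu in mus:
        for sigma in sigmas:
            ll = 0.0
            for x in sizes:
                k, n = responses[x]
                p = min(max(phi((x - mu) / sigma), 1e-6), 1 - 1e-6)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best_ll, best = ll, (mu, sigma)
    return best
```

With the PSE in hand, the attentional effect reported above is simply the percentage difference between the PSE and the physical size of the neutral stimulus.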
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
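The abstract does not specify the Bayesian model used; the sketch below follows the widely used causal-inference formulation (Kording et al., 2007) as a stand-in. It computes the posterior probability that both cues share one source and a model-averaged auditory location estimate; the noise SDs, prior SD, and common-cause prior are illustrative assumptions:

```python
import math

def causal_inference(xa, xv, sa=8.0, sv=2.0, sp=30.0, p_c=0.5):
    """Bayesian causal-inference sketch of visual capture.
    xa, xv: auditory and visual cue locations (deg).
    sa, sv: cue noise SDs; sp: SD of a zero-mean Gaussian spatial prior;
    p_c: prior probability of a common cause.
    Returns (posterior P(common cause), auditory location estimate)."""
    va, vv, vp = sa * sa, sv * sv, sp * sp

    # Marginal likelihood of both cues given ONE source, integrated
    # over the source location under the Gaussian prior.
    d = va * vv + va * vp + vv * vp
    L1 = math.exp(-0.5 * ((xa - xv) ** 2 * vp + xa * xa * vv + xv * xv * va) / d) \
         / (2 * math.pi * math.sqrt(d))

    # Marginal likelihood given TWO independent sources.
    def norm0(x, v):
        return math.exp(-0.5 * x * x / v) / math.sqrt(2 * math.pi * v)
    L2 = norm0(xa, va + vp) * norm0(xv, vv + vp)

    post_c1 = p_c * L1 / (p_c * L1 + (1 - p_c) * L2)

    # Location estimate under each hypothesis, then model-average.
    s1 = (xa / va + xv / vv) / (1 / va + 1 / vv + 1 / vp)  # fused cues
    s2 = (xa / va) / (1 / va + 1 / vp)                     # audition alone
    return post_c1, post_c1 * s1 + (1 - post_c1) * s2
```

The task-dependent prior reported in the study corresponds to fitting a different `p_c` per task: a larger `p_c` widens the disparity range over which capture occurs.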
Perceived duration decreases with increasing eccentricity.
Kliegl, Katrin M; Huckauf, Anke
2014-07-01
Previous studies examining the influence of stimulus location on temporal perception yield inhomogeneous and contradicting results. Therefore, the aim of the present study is to soundly examine the effect of stimulus eccentricity. In a series of five experiments, subjects compared the duration of foveal disks to disks presented at different retinal eccentricities on the horizontal meridian. The results show that the perceived duration of a visual stimulus declines with increasing eccentricity. The effect was replicated with various stimulus orders (Experiments 1-3), as well as with cortically magnified stimuli (Experiments 4-5), ruling out that the effect was merely caused by different cortical representation sizes. The apparent decreasing duration of stimuli with increasing eccentricity is discussed with respect to current models of time perception, the possible influence of visual attention and respective underlying physiological characteristics of the visual system. Copyright © 2014 Elsevier B.V. All rights reserved.
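The cortical magnification scaling used in Experiments 4-5 can be illustrated with a standard magnification estimate. This sketch uses the Horton & Hoyt (1991) approximation M(E) = a / (E + e2) mm of V1 per degree; these parameter values are a common choice in the literature, not necessarily the ones used in the study:

```python
def m_scaled_size(size_fovea_deg, ecc_deg, a=17.3, e2=0.75):
    """Scale a foveal stimulus so it occupies roughly the same cortical
    extent when shown at eccentricity `ecc_deg`, assuming the V1
    magnification estimate M(E) = a / (E + e2) mm/deg.
    Returns the scaled stimulus size in degrees."""
    m_fovea = a / e2               # magnification at fixation, mm/deg
    m_ecc = a / (ecc_deg + e2)     # magnification at eccentricity E
    return size_fovea_deg * m_fovea / m_ecc   # = size * (E + e2) / e2
```

Because the ratio simplifies to (E + e2) / e2, a 1° foveal disk must grow to 11° at 7.5° eccentricity under these parameters; that perceived duration still declined for such stimuli is what rules out cortical representation size as the explanation.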
Two visual systems in monitoring of dynamic traffic: effects of visual disruption.
Zheng, Xianjun Sam; McConkie, George W
2010-05-01
Studies from neurophysiology and neuropsychology provide support for two separate object- and location-based visual systems, ventral and dorsal. In the driving context, a study was conducted using a change detection paradigm to explore drivers' ability to monitor the dynamic traffic flow, and the effects of visual disruption on these two visual systems. While driving, a discrete change, such as vehicle location, color, or identity, was occasionally made in one of the vehicles on the road ahead of the driver. Experimental results show that without visual disruption, all changes were detected very well; yet, these equally perceivable changes were disrupted differently by a brief blank display (150 ms): the detection of location changes was especially reduced. The disruption effects were also bigger for the parked vehicle compared to the moving ones. The findings support different roles for the two visual systems in monitoring dynamic traffic: the "where", dorsal system, tracks vehicle spatiotemporal information on a perceptual level, encoding information in a coarse and transient manner; whereas the "what", ventral system, monitors vehicles' featural information, encoding information more accurately and robustly. Both systems work together contributing to the driver's situation awareness of traffic. Benefits and limitations of using the driving simulation are also discussed. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Electrophysiological indices of surround suppression in humans
Vanegas, M. Isabel; Blangero, Annabelle
2014-01-01
Surround suppression is a well-known example of contextual interaction in visual cortical neurophysiology, whereby the neural response to a stimulus presented within a neuron's classical receptive field is suppressed by surrounding stimuli. Human psychophysical reports present an obvious analog to the effects seen at the single-neuron level: stimuli are perceived as lower-contrast when embedded in a surround. Here we report on a visual paradigm that provides relatively direct, straightforward indices of surround suppression in human electrophysiology, enabling us to reproduce several well-known neurophysiological and psychophysical effects, and to conduct new analyses of temporal trends and retinal location effects. Steady-state visual evoked potentials (SSVEP) elicited by flickering “foreground” stimuli were measured in the context of various static surround patterns. Early visual cortex geometry and retinotopic organization were exploited to enhance SSVEP amplitude. The foreground response was strongly suppressed as a monotonic function of surround contrast. Furthermore, suppression was stronger for surrounds of matching orientation than orthogonally-oriented ones, and stronger at peripheral than foveal locations. These patterns were reproduced in psychophysical reports of perceived contrast, and peripheral electrophysiological suppression effects correlated with psychophysical effects across subjects. Temporal analysis of SSVEP amplitude revealed short-term contrast adaptation effects that caused the foreground signal to either fall or grow over time, depending on the relative contrast of the surround, consistent with stronger adaptation of the suppressive drive. This electrophysiology paradigm has clinical potential in indexing not just visual deficits but possibly gain control deficits expressed more widely in the disordered brain. PMID:25411464
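SSVEP amplitude at a known flicker frequency can be estimated by projecting the recording onto sine and cosine at that frequency, which is equivalent to reading one DFT bin when the analysis window holds an integer number of cycles. This is a minimal sketch of that step, not the authors' analysis pipeline; the sampling rate and window choice are assumptions:

```python
import math

def ssvep_amplitude(signal, fs, freq):
    """Amplitude of the oscillatory component of `signal` at `freq` Hz.
    fs: sampling rate in Hz.  Projects the recording onto quadrature
    sinusoids; exact for windows with an integer number of cycles."""
    n = len(signal)
    c = sum(x * math.cos(2 * math.pi * freq * i / fs)
            for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs)
            for i, x in enumerate(signal))
    return 2.0 * math.hypot(c, s) / n
```

Surround suppression would then show up as a drop in this foreground amplitude as surround contrast is raised, and the temporal trends reported above correspond to computing it over successive short windows.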
Spatial updating in area LIP is independent of saccade direction.
Heiser, Laura M; Colby, Carol L
2006-05-01
We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.
The visual system prioritizes locations near corners of surfaces (not just locations near a corner).
Bertamini, Marco; Helmy, Mai; Bates, Daniel
2013-11-01
When a new visual object appears, attention is directed toward it. However, some locations along the outline of the new object may receive more resources, perhaps as a consequence of their relative importance in describing its shape. Evidence suggests that corners receive enhanced processing, relative to the straight edges of an outline (corner enhancement effect). Using a technique similar to that in an original study in which observers had to respond to a probe presented near a contour (Cole et al. in Journal of Experimental Psychology: Human Perception and Performance 27:1356-1368, 2001), we confirmed this effect. When figure-ground relations were manipulated using shaded surfaces (Exps. 1 and 2) and stereograms (Exps. 3 and 4), two novel aspects of the phenomenon emerged: We found no difference between corners perceived as being convex or concave, and we found that the enhancement was stronger when the probe was perceived as being a feature of the surface that the corner belonged to. Therefore, the enhancement is not based on spatial aspects of the regions in the image, but critically depends on figure-ground stratification, supporting the link between the prioritization of corners and the representation of surface layout.
Schmalzl, Laura; Thomke, Erik; Ragnö, Christina; Nilseryd, Maria; Stockselius, Anita; Ehrsson, H. Henrik
2011-01-01
Most amputees experience phantom limbs, or the sensation that their amputated limb is still attached to the body. Phantom limbs can be perceived in the location previously occupied by the intact limb, or they can gradually retract inside the stump, a phenomenon referred to as “telescoping”. Telescoping is relevant from a clinical point of view, as it tends to be related to increased levels of phantom pain. In the current study we demonstrate how a full-body illusion can be used to temporarily revoke telescoping sensations in upper limb amputees. During this illusion participants view the body of a mannequin from a first person perspective while being subjected to synchronized visuo-tactile stimulation through stroking, which makes them experience the mannequin’s body as their own. In Experiment 1 we used an intact mannequin, and showed that amputees can experience ownership of an intact body as well as referral of touch from both hands of the mannequin. In Experiment 2 and 3 we used an amputated mannequin, and demonstrated that depending on the spatial location of the strokes applied to the mannequin, participants experienced their phantom hand to either remain telescoped, or to actually be located below the stump. The effects were supported by subjective data from questionnaires, as well as verbal reports of the perceived location of the phantom hand in a visual judgment task. These findings are of particular interest, as they show that the temporary revoking of telescoping sensations does not necessarily have to involve the visualization of an intact hand or illusory movement of the phantom (as in the rubber hand illusion or mirror visual feedback therapy), but that it can also be obtained through mere referral of touch from the stump to the spatial location corresponding to that previously occupied by the intact hand. Moreover, our study also provides preliminary evidence for the fact that these manipulations can have an effect on phantom pain sensations. 
PMID:22065956
Attention modulates perception of visual space
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.
2017-01-01
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space, they should at the same time effectively perceive their own position as a viewpoint. However, little is known about observers' percepts of their own spatial location based on the visual scene information viewed from them. Previous studies indicate that two distinct visual spatial processes exist in the locomotion situation: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments including only static lane-edge information (i.e., limited information). We investigated the visual factors associated with static lane-edge information that may affect these perceptions. In particular, we examined the effects of two factors on egocentric direction and position perceptions. One is the "uprightness factor": "far" visual information is seen at an upper location relative to "near" visual information. The other is the "central vision factor": observers usually look at "far" visual information using central vision (i.e., foveal vision) whereas "near" visual information is viewed using peripheral vision. Experiment 1 examined the effect of the "uprightness factor" using normal and inverted road images. Experiment 2 examined the effect of the "central vision factor" using normal and transposed road images where the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is interfered with by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both "uprightness" and "central vision" factors are important for egocentric direction perception, but not for egocentric position perception.
Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron
2014-03-01
Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presenting them on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images look different when visualized on different display systems. The importance of these visual differences becomes apparent when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, it seems that the specific calibration target chosen for the display system has an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibrating to the DICOM GSDF performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising as it suggests that further research work could lead to better definition of an optimized calibration target for digital pathology images resulting in a positive effect on clinical performance.
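The background-versus-feature comparison described above boils down to a colour-difference computation in CIELAB space. For brevity this sketch uses the simpler CIE76 Euclidean ΔE*ab as a stand-in for the CIEDE2000 (dE2000) metric used in the paper, which adds lightness, chroma, and hue weightings; the Lab values below are hypothetical:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIELAB
    colours (L*, a*, b*).  A simpler stand-in for CIEDE2000, which
    additionally weights lightness, chroma, and hue differences."""
    return math.dist(lab1, lab2)

# Perceived-contrast proxy: difference between slide background and a
# clinically relevant feature, as rendered on a given display.
background = (85.0, 2.0, 4.0)    # hypothetical Lab of the slide background
feature = (60.0, 35.0, 20.0)     # hypothetical Lab of a stained nucleus
contrast = delta_e76(background, feature)
```

Repeating the same measurement of the rendered colours on each display (or under each calibration target) and comparing the resulting ΔE values is the comparison the paper reports with dE2000.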
Jellema, Tjeerd; Maassen, Gerard; Perrett, David I
2004-07-01
This study investigated the cellular mechanisms in the anterior part of the superior temporal sulcus (STSa) that underlie the integration of different features of the same visually perceived animate object. Three visual features were systematically manipulated: form, motion and location. In 58% of a population of cells selectively responsive to the sight of a walking agent, the location of the agent significantly influenced the cell's response. The influence of position was often evident in intricate two- and three-way interactions with the factors form and/or motion. For only one of the 31 cells tested, the response could be explained by just a single factor. For all other cells at least two factors, and for half of the cells (52%) all three factors, played a significant role in controlling responses. Our findings support a reformulation of the Ungerleider and Mishkin model, which envisages a subdivision of the visual processing into a ventral 'what' and a dorsal 'where' stream. We demonstrated that at least part of the temporal cortex ('what' stream) makes ample use of visual spatial information. Our findings open up the prospect of a much more elaborate integration of visual properties of animate objects at the single cell level. Such integration may support the comprehension of animals and their actions.
A formal theory of feature binding in object perception.
Ashby, F G; Prinzmetal, W; Ivry, R; Maddox, W T
1996-01-01
Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders.
NASA Technical Reports Server (NTRS)
Lathrop, William B.; Kaiser, Mary K.
2002-01-01
Two experiments examined perceived spatial orientation in a small environment as a function of experiencing that environment under three conditions: real-world, desktop-display (DD), and head-mounted display (HMD). Across the three conditions, participants acquired two targets located on a perimeter surrounding them, and attempted to remember the relative locations of the targets. Subsequently, participants were tested on how accurately and consistently they could point in the remembered direction of a previously seen target. Results showed that participants were significantly more consistent in the real-world and HMD conditions than in the DD condition. Further, it is shown that the advantages observed in the HMD and real-world conditions were not simply due to nonspatial response strategies. These results suggest that the additional idiothetic information afforded in the real-world and HMD conditions is useful for orientation purposes in our presented task domain. Our results are relevant to interface design issues concerning tasks that require spatial search, navigation, and visualization.
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction as or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and an estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance: the saccade error (movement-to-movement variability in saccade amplitude) and the visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
Perception of 3-D location based on vision, touch, and extended touch
Giudice, Nicholas A.; Klatzky, Roberta L.; Bennett, Christopher R.; Loomis, Jack M.
2012-01-01
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate. PMID:23070234
Spatial Hearing with Incongruent Visual or Auditory Room Cues
Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten
2016-01-01
In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
Ronchi, Roberta; Bello-Ruiz, Javier; Lukowska, Marta; Herbelin, Bruno; Cabrilo, Ivan; Schaller, Karl; Blanke, Olaf
2015-04-01
Recent evidence suggests that multisensory integration of bodily signals involving exteroceptive and interoceptive information modulates bodily aspects of self-consciousness such as self-identification and self-location. In the so-called Full Body Illusion subjects watch a virtual body being stroked while they perceive tactile stimulation on their own body inducing illusory self-identification with the virtual body and a change in self-location towards the virtual body. In a related illusion, it has recently been shown that similar changes in self-identification and self-location can be observed when an interoceptive signal is used in association with visual stimulation of the virtual body (i.e., participants observe a virtual body illuminated in synchrony with their heartbeat). Although brain imaging and neuropsychological evidence suggest that the insular cortex is a core region for interoceptive processing (such as cardiac perception and awareness) as well as for self-consciousness, it is currently not known whether the insula mediates cardio-visual modulation of self-consciousness. Here we tested the involvement of insular cortex in heartbeat awareness and cardio-visual manipulation of bodily self-consciousness in a patient before and after resection of a selective right neoplastic insular lesion. Cardio-visual stimulation induced an abnormally enhanced state of bodily self-consciousness; in addition, cardio-visual manipulation was associated with an experienced loss of the spatial unity of the self (illusory bi-location and duplication of his body), not observed in healthy subjects. Heartbeat awareness was found to decrease after insular resection. Based on these data we propose that the insula mediates interoceptive awareness as well as cardio-visual effects on bodily self-consciousness and that insular processing of interoceptive signals is an important mechanism for the experienced unity of the self. Copyright © 2015 Elsevier Ltd. All rights reserved.
Verification of Emmert's law in actual and virtual environments.
Nakamizo, Sachio; Imamura, Mariko
2004-11-01
We examined Emmert's law by measuring the perceived size of an afterimage and the perceived distance of the surface on which the afterimage was projected in actual and virtual environments. The actual environment consisted of a corridor with ample cues as to distance and depth. The virtual environment was made from the CAVE of a virtual reality system. The afterimage, disc-shaped and one degree in diameter, was produced by flashing with an electric photoflash. The observers were asked to estimate the perceived distance to surfaces located at various physical distances (1 to 24 m) by the magnitude estimation method and to estimate the perceived size of the afterimage projected on the surfaces by a matching method. The results show that the perceived size of the afterimage was directly proportional to the perceived distance in both environments; thus, Emmert's law holds in virtual as well as actual environments. We suggest that Emmert's law is a specific case of a functional principle of distance scaling by the visual system.
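Emmert's law, which the abstract above verifies, states that for a fixed retinal image (such as an afterimage) the perceived linear size is directly proportional to the perceived distance of the surface it is projected on. A minimal sketch of that proportionality, using the small-angle approximation and the study's 1° afterimage, is:

```python
import math

def emmert_perceived_size(retinal_angle_rad, perceived_distance_m):
    """Emmert's law under the small-angle approximation: perceived
    linear size = retinal angle (rad) * perceived distance (m)."""
    return retinal_angle_rad * perceived_distance_m

# The study's disc-shaped afterimage subtended one degree of visual angle.
angle = math.radians(1.0)
size_near = emmert_perceived_size(angle, 1.0)   # projection surface at 1 m
size_far = emmert_perceived_size(angle, 24.0)   # projection surface at 24 m
```

With perceived distance increasing from 1 m to 24 m (the study's range of physical distances), the predicted perceived size grows by exactly the same factor of 24, which is the linear relation the matching data confirmed in both the actual and the virtual environment.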
Forever young: Visual representations of gender and age in online dating sites for older adults.
Gewirtz-Meydan, Ateret; Ayalon, Liat
2017-06-13
Online dating has become increasingly popular among older adults following broader social media adoption patterns. The current study examined the visual representations of people on 39 dating sites intended for the older population, with a particular focus on the visualization of the intersection between age and gender. All 39 dating sites for older adults were located through the Google search engine. Visual thematic analysis was performed with reference to general, non-age-related signs (e.g., facial expression, skin color), signs of aging (e.g., perceived age, wrinkles), relational features (e.g., proximity between individuals), and additional features such as number of people presented. The visual analysis in the present study revealed a clear intersection between ageism and sexism in the presentation of older adults. The majority of men and women were smiling and had a fair complexion, with light eye color and perceived age of younger than 60. Older women were presented as younger and wore more cosmetics as compared with older men. The present study stresses the social regulation of sexuality, as only heterosexual couples were presented. The narrow representation of older adults and the anti-aging messages portrayed in the pictures convey that love, intimacy, and sexual activity are for older adults who are "forever young."
King, Andy J; Jensen, Jakob D; Davis, LaShara A; Carcioppolo, Nick
2014-01-01
There is a paucity of research on the visual images used in health communication messages and campaign materials. Even though many studies suggest further investigation of these visual messages and their features, few studies provide specific constructs or assessment tools for evaluating the characteristics of visual messages in health communication contexts. The authors conducted 2 studies to validate a measure of perceived visual informativeness (PVI), a message construct assessing visual messages presenting statistical or indexical information. In Study 1, a 7-item scale was created that demonstrated good internal reliability (α = .91), as well as convergent and divergent validity with related message constructs such as perceived message quality, perceived informativeness, and perceived attractiveness. PVI also converged with a preference for visual learning but was unrelated to a person's actual vision ability. In addition, PVI exhibited concurrent validity with a number of important constructs including perceived message effectiveness, decisional satisfaction, and three key public health theory behavior predictors: perceived benefits, perceived barriers, and self-efficacy. Study 2 provided more evidence that PVI is an internally reliable measure and demonstrates that PVI is a modifiable message feature that can be tested in future experimental work. PVI provides an initial step to assist in the evaluation and testing of visual messages in campaign and intervention materials promoting informed decision making and behavior change.
The role of visuohaptic experience in visually perceived depth.
Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S
2009-06-01
Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.
Discrete Events as Units of Perceived Time
ERIC Educational Resources Information Center
Liverence, Brandon M.; Scholl, Brian J.
2012-01-01
In visual images, we perceive both space (as a continuous visual medium) and objects (that inhabit space). Similarly, in dynamic visual experience, we perceive both continuous time and discrete events. What is the relationship between these units of experience? The most intuitive answer may be similar to the spatial case: time is perceived as an…
[Perception of physiological visual illusions by individuals with schizophrenia].
Ciszewski, Słowomir; Wichowicz, Hubert Michał; Żuk, Krzysztof
2015-01-01
Visual perception by individuals with schizophrenia has not been extensively researched. The focus of this review is the perception of physiological visual illusions by patients with schizophrenia, differences in which have been reported in a small number of studies. The increased or decreased susceptibility of these patients to various illusions seems unconnected to the illusion's location of origin in the visual apparatus, as is also the case for illusions in other modalities. The susceptibility of patients with schizophrenia to haptic illusions has not yet been investigated, although the need for such investigation is clear. The emerging picture is that some individuals with schizophrenia are "resistant" to some of the illusions and are able to assess visual phenomena more "rationally", yet certain illusions (e.g., the Müller-Lyer illusion) are perceived more intensely. Disturbances in the perception of visual illusions have neither been classified as possible diagnostic indicators of a dangerous mental condition, nor included in the endophenotype of schizophrenia. Although the relevant data are sparse, the ability to replicate the results is limited, and the research model lacks a "gold standard", some preliminary conclusions may be drawn. There are indications that disturbances in visual perception are connected to the extent of disorganization, poor initial social functioning, poor prognosis, and the types of schizophrenia described as neurodevelopmental. Patients with schizophrenia usually fail to perceive those illusions that require volitional controlled attention, and show a lack of sensitivity to the contrast between shape and background.
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
Deployment of spatial attention towards locations in memory representations. An EEG study.
Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J
2013-01-01
Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.
Eye Choice for Acquisition of Targets in Alternating Strabismus
Economides, John R.; Adams, Daniel L.
2014-01-01
In strabismus, potentially either eye can inform the brain about the location of a target so that an accurate saccade can be made. Sixteen human subjects with alternating exotropia were tested dichoptically while viewing stimuli on a tangent screen. Each trial began with a fixation cross visible to only one eye. After the subject fixated the cross, a peripheral target visible to only one eye flashed briefly. The subject's task was to look at it. As a rule, the eye to which the target was presented was the eye that acquired the target. However, when stimuli were presented in the far nasal visual field, subjects occasionally performed a “crossover” saccade by placing the other eye on the target. This strategy avoided the need to make a large adducting saccade. In such cases, information about target location was obtained by one eye and used to program a saccade for the other eye, with a corresponding latency increase. In 10/16 subjects, targets were presented on some trials to both eyes. Binocular sensory maps were also compiled to delineate the portions of the visual scene perceived with each eye. These maps were compared with subjects' pattern of eye choice for target acquisition. There was a correspondence between suppression scotoma maps and the eye used to acquire peripheral targets. In other words, targets were fixated by the eye used to perceive them. These studies reveal how patients with alternating strabismus, despite eye misalignment, manage to localize and capture visual targets in their environment. PMID:25355212
Attention enhances contrast appearance via increased input baseline of neural responses
Cutrone, Elizabeth K.; Heeger, David J.; Carrasco, Marisa
2014-01-01
Covert spatial attention increases the perceived contrast of stimuli at attended locations, presumably via enhancement of visual neural responses. However, the relation between perceived contrast and the underlying neural responses has not been characterized. In this study, we systematically varied stimulus contrast, using a two-alternative, forced-choice comparison task to probe the effect of attention on appearance across the contrast range. We modeled performance in the task as a function of underlying neural contrast-response functions. Fitting this model to the observed data revealed that an increased input baseline in the neural responses accounted for the enhancement of apparent contrast with spatial attention. PMID:25549920
The influence of visual contextual information on the emergence of the especial skill in basketball.
Stöckel, Tino; Breslin, Gavin
2013-10-01
We examined whether basketball throwing performance in general and motor skill specificity from the free throw distance in particular are influenced by visual contextual information. Experienced basketball players (N = 36) performed basketball set shots at five distances from the basket. Of particular interest was the performance from the free throw distance (4.23 m), at which experienced basketball players are expected to show superior performance compared with nearby locations as a result of massive amounts of practice. Whereas a control group performed the shots on a regular basketball court, the distance between the rim and the free throw line was either increased or decreased by 30 cm in two experimental groups. Findings showed that only the control group had a superior performance from the free throw distance, and the experimental groups did not. Moreover, all groups performed more accurately from the perceived free throw line (independent of its location) compared with nearby locations. The findings suggest that visual context information influences the presence of specificity effects in experienced performers. The findings have theoretical implications for explaining the memory representation underlying the especial skill effect in basketball.
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Amplifying the helicopter drift in a conformal HMD
NASA Astrophysics Data System (ADS)
Schmerwitz, Sven; Knabl, Patrizia M.; Lueken, Thomas; Doehler, Hans-Ullrich
2016-05-01
Helicopter operations require a well-controlled and minimal lateral drift shortly before ground contact. Any lateral speed exceeding this small threshold can cause a dangerous momentum around the roll axis, which may cause a total roll over of the helicopter. As long as pilots can observe visual cues from the ground, they are able to easily control the helicopter drift. But whenever natural vision is reduced or even obscured, e.g. due to night, fog, or dust, this controllability diminishes. Therefore helicopter operators could benefit from some type of "drift indication" that mitigates the influence of a degraded visual environment. Generally, humans derive ego motion from the perceived flow of environmental objects. The visual cues perceived are located close to the helicopter, so even small movements can be recognized. This fact was used to investigate a modified drift indication. To enhance the perception of ego motion in a conformal HMD symbol set, the measured movement was used to generate a pattern motion in the forward field of view close to or on the landing pad. The paper discusses the method of amplified ego-motion drift indication. Aspects concerning impact factors such as visualization type, location, gain, and more are addressed, along with conclusions from previous studies: a high-fidelity experiment and a part-task experiment. A part-task study is presented that compared different amplified drift indications against a predictor. Twenty-four participants, 15 holding a fixed-wing license and 4 being helicopter pilots, had to perform a dual task on a virtual reality headset. A simplified control model was used to steer a "helicopter" down to a landing pad while acknowledging randomly placed characters.
Thurlow, W R
1980-01-01
Messages were presented which moved from right to left along an electronic alphabetic display which was varied in "window" size from 4 through 32 letter spaces. Deaf subjects signed the messages they perceived. Relatively few errors were made even at the highest rate of presentation, which corresponded to a typing rate of 60 words/min. It is concluded that many deaf persons can make effective use of a small visual display. A reduced cost is then possible for visual communication instruments for these people through reduced display size. Deaf subjects who can profit from a small display can be located by a sentence test administered by tape recorder which drives the display of the communication device by means of the standard code of the deaf teletype network.
Amodal causal capture in the tunnel effect.
Bae, Gi Yeul; Flombaum, Jonathan I
2011-01-01
In addition to identifying individual objects in the world, the visual system must also characterize the relationships between objects, for instance when objects occlude one another or cause one another to move. Here we explored the relationship between perceived causality and occlusion. Can one perceive causality in an occluded location? In several experiments, observers judged whether a centrally presented event involved a single object passing behind an occluder, or one object causally launching another (out of view and behind the occluder). With no additional context, the centrally presented event was typically judged as a non-causal pass, even when the occluding and disoccluding objects were different colors--an illusion known as the 'tunnel effect' that results from spatiotemporal continuity. However, when a synchronized context event involved an unambiguous causal launch, participants perceived a causal launch behind the occluder. This percept of an occluded causal interaction could also be driven by grouping and synchrony cues in the absence of any explicitly causal interaction. These results reinforce the hypothesis that causality is an aspect of perception. It is among the interpretations of the world that are independently available to vision when resolving ambiguity, and that the visual system can 'fill in' amodally.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
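The maximum-likelihood estimation (MLE) model of optimal integration referenced in this abstract makes a concrete quantitative prediction: each cue is weighted by its reliability (inverse variance), and the bimodal estimate is more precise than either cue alone. A minimal sketch with illustrative numbers (not the study's data):

```python
def mle_integrate(mu_v, sigma_v, mu_a, sigma_a):
    """Optimally combine visual and auditory location estimates.

    Each cue is weighted by its reliability (inverse variance); the
    integrated estimate has lower variance than either cue alone.
    """
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    w_a = 1 - w_v
    mu_av = w_v * mu_v + w_a * mu_a
    sigma_av = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5
    return mu_av, sigma_av

# Ventriloquism-style cue conflict: vision at 0 deg, audition at 5 deg.
# Vision is the more precise cue here, so the percept is drawn toward it.
mu_av, sigma_av = mle_integrate(0.0, 1.0, 5.0, 3.0)
```

Under this model, the noisier visual estimates in amblyopia (larger sigma_v) simply shift weight toward audition; the study's finding is that observed bimodal precision and bias did not deviate from these optimal predictions.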
Contextual effects on perceived contrast: figure-ground assignment and orientation contrast.
Self, Matthew W; Mookhoek, Aart; Tjalma, Nienke; Roelfsema, Pieter R
2015-02-02
Figure-ground segregation is an important step in the path leading to object recognition. The visual system segregates objects ('figures') in the visual scene from their backgrounds ('ground'). Electrophysiological studies in awake-behaving monkeys have demonstrated that neurons in early visual areas increase their firing rate when responding to a figure compared to responding to the background. We hypothesized that similar changes in neural firing would take place in early visual areas of the human visual system, leading to changes in the perception of low-level visual features. In this study, we investigated whether contrast perception is affected by figure-ground assignment using stimuli similar to those in the electrophysiological studies in monkeys. We measured contrast discrimination thresholds and perceived contrast for Gabor probes placed on figures or the background and found that the perceived contrast of the probe was increased when it was placed on a figure. Furthermore, we tested how this effect compared with the well-known effect of orientation contrast on perceived contrast. We found that figure-ground assignment and orientation contrast produced changes in perceived contrast of a similar magnitude, and that they interacted. Our results demonstrate that figure-ground assignment influences perceived contrast, consistent with an effect of figure-ground assignment on activity in early visual areas of the human visual system. © 2015 ARVO.
No attentional capture from invisible flicker
Alais, David; Locke, Shannon M.; Leung, Johahn; Van der Burg, Erik
2016-01-01
We tested whether fast flicker can capture attention using eight flicker frequencies from 20–96 Hz, including several too high to be perceived (>50 Hz). Using a 480 Hz visual display rate, we presented smoothly sampled sinusoidal temporal modulations at: 20, 30, 40, 48, 60, 69, 80, and 96 Hz. We first established flicker detection rates for each frequency. Performance was at or near ceiling until 48 Hz and dropped sharply to chance level at 60 Hz and above. We then presented the same flickering stimuli as pre-cues in a visual search task containing five elements. Flicker location varied randomly and was therefore congruent with target location on 20% of trials. Comparing congruent and incongruent trials revealed a very strong congruency effect (faster search for cued targets) for all detectable frequencies (20–48 Hz) but no effect for faster flicker rates that were detected at chance. This pattern of results (obtained with brief flicker cues: 58 ms) was replicated for long flicker cues (1000 ms) intended to allow for entrainment to the flicker frequency. These results indicate that only visible flicker serves as an exogenous attentional cue and that flicker rates too high to be perceived are completely ineffective. PMID:27377759
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; Manning, David J.; Dix, Alan; Donovan, Tim
2009-02-01
Aim: The goal of the study is to determine the spatial frequency characteristics at image locations associated with observers' overt and covert decisions, and to find out whether there are any similarities within observer groups of the same radiological experience or the same accuracy level. Background: The radiological task is a visual search and decision-making procedure involving visual perception and cognitive processing. Humans perceive the world through a number of spatial frequency channels, each sensitive to visual information carried by different spatial frequency ranges and orientations. Recent studies have shown that particular physical properties of local and global image-based elements are correlated with the performance and the level of experience of human observers in breast cancer and lung nodule detection. Neurological findings in visual perception inspired wavelet applications in vision research because the methodology tries to mimic the brain's processing algorithms. Methods: A wavelet approach to the analysis of a set of postero-anterior chest radiographs was used to characterize the perceptual preferences of observers with different levels of experience in the radiological task. Psychophysical methodology was applied to track eye movements over the image, and the ROIs corresponding to the observers' fixation clusters were analysed in the spaces framed by Daubechies functions. Results: Significant differences were found between the spatial frequency characteristics at the locations of different decisions.
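As a toy illustration of the wavelet machinery invoked here, a one-level Haar transform (the simplest Daubechies wavelet, db1) splits a 1-D signal into low- and high-spatial-frequency bands; the study itself used 2-D Daubechies analyses of image ROIs, so this is only a schematic sketch:

```python
def haar_dwt(signal):
    """One level of the Haar (Daubechies db1) transform: split an
    even-length signal into a low-frequency approximation band and a
    high-frequency detail band."""
    s = 2 ** -0.5
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]
    detail = [(a - b) * s for a, b in pairs]
    return approx, detail

# A luminance step edge concentrates energy in the detail coefficient
# spanning the edge, mimicking how edges load high-frequency channels.
approx, detail = haar_dwt([1, 1, 1, 5, 5, 5, 5, 5])
```

Applying such a decomposition recursively yields the multi-scale coefficients whose statistics can be compared across fixation ROIs.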
Maselli, Antonella; Slater, Mel
2014-01-01
Bodily illusions have been used to study bodily self-consciousness and disentangle its various components, among others the sense of ownership and self-location. Congruent multimodal correlations between the real body and a fake humanoid body can in fact trigger the illusion that the fake body is one's own and/or disrupt the unity between the perceived self-location and the position of the physical body. However, the extent to which changes in self-location entail changes in ownership is still a matter of debate. Here we address this problem with the support of immersive virtual reality. Congruent visuotactile stimulation was delivered to healthy participants to trigger full body illusions from different visual perspectives, each resulting in a different degree of overlap between real and virtual body. Changes in ownership and self-location were measured with novel self-posture assessment tasks and with an adapted version of the cross-modal congruency task. We found that, despite their strong coupling, self-location and ownership can be selectively altered: self-location was affected when participants had a third-person perspective over the virtual body, while ownership toward the virtual body was experienced only in the conditions with total or partial overlap. Thus, when the virtual body was seen in the far extra-personal space, changes in self-location were not coupled with changes in ownership. When a partial spatial overlap was present, ownership was instead typically experienced together with a boosted change in the perceived self-location. We discuss the results in the context of current knowledge of the multisensory integration mechanisms contributing to self-body perception. We argue that changes in the perceived self-location are associated with the dynamical representation of peripersonal space encoded by visuotactile neurons. On the other hand, our results speak in favor of visuo-proprioceptive neuronal populations being a driving trigger in full body ownership illusions.
PMID:25309383
Retrospective Attention Gates Discrete Conscious Access to Past Sensory Stimuli.
Thibault, Louis; van den Berg, Ronald; Cavanagh, Patrick; Sergent, Claire
2016-01-01
Cueing attention after the disappearance of visual stimuli biases which items will be remembered best. This observation has historically been attributed to the influence of attention on memory as opposed to subjective visual experience. We recently challenged this view by showing that cueing attention after the stimulus can improve the perception of a single Gabor patch at threshold levels of contrast. Here, we test whether this retro-perception actually increases the frequency of consciously perceiving the stimulus, or simply allows for a more precise recall of its features. We used retro-cues in an orientation-matching task and performed mixture-model analysis to independently estimate the proportion of guesses and the precision of non-guess responses. We find that the improvements in performance conferred by retrospective attention are overwhelmingly determined by a reduction in the proportion of guesses, providing strong evidence that attracting attention to the target's location after its disappearance increases the likelihood of perceiving it consciously.
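The mixture-model analysis described above treats each response error as coming either from a uniform "guess" distribution or from a von Mises distribution centered on the target, so guess rate and precision can be estimated separately. A self-contained sketch on simulated data (the guess rate and concentration below are invented for illustration):

```python
import math
import random

def bessel_i0(x):
    """Modified Bessel function I0 via its power series (normalizes the
    von Mises density)."""
    total, term = 1.0, 1.0
    for k in range(1, 60):
        term *= (x / 2) ** 2 / k**2
        total += term
    return total

def mixture_loglik(errors, guess_rate, kappa):
    """Log-likelihood of angular errors (radians in [-pi, pi]) under a
    uniform-guess + von Mises-memory mixture."""
    p_guess = 1 / (2 * math.pi)
    norm = 2 * math.pi * bessel_i0(kappa)
    return sum(math.log(guess_rate * p_guess
                        + (1 - guess_rate) * math.exp(kappa * math.cos(e)) / norm)
               for e in errors)

# Simulated observer: 30% pure guesses, otherwise precise responses.
random.seed(1)
errors = [random.uniform(-math.pi, math.pi) if random.random() < 0.3
          else random.gauss(0.0, 0.2) for _ in range(2000)]

# A grid search over the guess rate (with kappa fixed at the generating
# precision) recovers a proportion of guesses near the true 0.3.
best_g = max((g / 20 for g in range(1, 20)),
             key=lambda g: mixture_loglik(errors, g, 25.0))
```

A full analysis would fit guess rate and kappa jointly; the separation of the two parameters is what lets the study attribute the retro-cue benefit to fewer guesses rather than higher precision.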
Huang, Hsu-Chia; Lee, Yen-Tung; Chen, Wen-Yeo; Liang, Caleb
2017-01-01
Self-location -the sense of where I am in space-provides an experiential anchor for one's interaction with the environment. In the studies of full-body illusions, many researchers have defined self-location solely in terms of body-location -the subjective feeling of where my body is. Although this view is useful, there is an issue regarding whether it can fully accommodate the role of 1PP-location -the sense of where my first-person perspective is located in space. In this study, we investigate self-location by comparing body-location and 1PP-location: using a head-mounted display (HMD) and a stereo camera, the subjects watched their own body standing in front of them and received tactile stimulations. We manipulated their senses of body-location and 1PP-location in three different conditions: the participants standing still (Basic condition), asking them to move forward (Walking condition), and swiftly moving the stereo camera away from their body (Visual condition). In the Walking condition, the participants watched their body moving away from their 1PP. In the Visual condition, the scene seen via the HMD was systematically receding. Our data show that, under different manipulations of movement, the spatial unity between 1PP-location and body-location can be temporarily interrupted. Interestingly, we also observed a "double-body effect." We further suggest that it is better to consider body-location and 1PP-location as interrelated but distinct factors that jointly support the sense of self-location.
Processing of threat-related information outside the focus of visual attention.
Calvo, Manuel G; Castillo, M Dolores
2005-05-01
This study investigates whether threat-related words are especially likely to be perceived in unattended locations of the visual field. Threat-related, positive, and neutral words were presented at fixation as probes in a lexical decision task. The probe word was preceded by 2 simultaneous prime words (1 foveal, i.e., at fixation; 1 parafoveal, i.e., 2.2 deg. of visual angle from fixation), which were presented for 150 ms, one of which was either identical or unrelated to the probe. Results showed significant facilitation in lexical response times only for the probe threat words when primed parafoveally by an identical word presented in the right visual field. We conclude that threat-related words have privileged access to processing outside the focus of attention. This reveals a cognitive bias in the preferential, parallel processing of information that is important for adaptation.
Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.
2017-01-01
The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756
Environmental surfaces and the compression of perceived visual space
Bian, Zheng; Andersen, George J.
2011-01-01
The present study examined whether the compression of perceived visual space varies according to the type of environmental surface being viewed. To examine this issue, observers made exocentric distance judgments when viewing simulated 3D scenes. In 4 experiments, observers viewed ground and ceiling surfaces and performed either an L-shaped matching task (Experiments 1, 3, and 4) or a bisection task (Experiment 2). Overall, we found considerable compression of perceived exocentric distance on both ground and ceiling surfaces. However, the perceived exocentric distance was less compressed on a ground surface than on a ceiling surface. In addition, this ground surface advantage did not vary systematically as a function of the distance in the scene. These results suggest that the perceived visual space when viewing a ground surface is less compressed than the perceived visual space when viewing a ceiling surface and that the perceived layout of a surface varies as a function of the type of the surface. PMID:21669858
Response-specifying cue for action interferes with perception of feature-sharing stimuli.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-06-01
Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, a phenomenon known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension randomly determined in a trial-by-trial manner. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue were separately determined. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention or response location. This finding emphasizes the role of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.
Perceptual organization of shape, color, shade, and lighting in visual and pictorial objects
Pinna, Baingio
2012-01-01
The main questions we asked in this work are the following: Where are representations of shape, color, depth, and lighting mostly located? Does their formation take time to develop? How do they contribute to determining and defining a visual object, and how do they differ? How do visual artists use them to create objects and scenes? Is the way artists use them related to the way we perceive them? To answer these questions, we studied the microgenetic development of object perception and formation. Our hypothesis is that the main object properties are extracted in sequential order, the same order in which these properties are used by artists and by children of different ages to paint objects. The results supported the microgenesis of object formation according to the following sequence: contours, color, shading, and lighting. PMID:23145283
Rösler, Lara; Rolfs, Martin; van der Stigchel, Stefan; Neggers, Sebastiaan F. W.; Cahn, Wiepke; Kahn, René S.
2015-01-01
Corollary discharge (CD) refers to “copies” of motor signals sent to sensory areas, allowing prediction of future sensory states. These copies enable the putative mechanisms that support the distinction between self-generated and externally generated sensations. Accordingly, many authors have suggested that disturbed CD engenders psychotic symptoms of schizophrenia, which are characterized by agency distortions. CD also supports perceived visual stability across saccadic eye movements and is used to predict the postsaccadic retinal coordinates of visual stimuli, a process called remapping. We tested whether schizophrenia patients (SZP) show remapping disturbances as evidenced by systematic transsaccadic mislocalizations of visual targets. SZP and healthy controls (HC) performed a task in which a saccadic target disappeared upon saccade initiation and, after a brief delay, reappeared at a horizontally displaced position. HC judged the direction of this displacement accurately, despite spatial errors in saccade landing site, indicating that their comparison of the actual to predicted postsaccadic target location relied on accurate CD. SZP performed worse and relied more on saccade landing site as a proxy for the presaccadic target, consistent with disturbed CD. This remapping failure was strongest in patients with more severe psychotic symptoms, consistent with the theoretical link between disturbed CD and phenomenological experiences in schizophrenia. PMID:26108951
O'Donnell, Nicole Hummel; Willoughby, Jessica Fitts
2017-10-01
Health professionals increasingly use social media to communicate health information, but it is unknown how visual message presentation on these platforms affects message reception. This study used an experiment to analyse how young adults (n = 839) perceive sexual health messages on Instagram. Participants were exposed to one of four conditions based on visual message presentation. Messages with embedded health content had the highest perceived message effectiveness ratings. Additionally, message sensation value, attitudes and systematic information processing were significant predictors of perceived message effectiveness. Implications for visual message design for electronic health are discussed.
Fan, Zhao; Harris, John
2010-10-12
In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.
Perception of straightness and parallelism with minimal distance information.
Rogers, Brian; Naumenko, Olga
2016-07-01
The ability of human observers to judge the straightness and parallelism of extended lines has been a neglected topic of study since von Helmholtz's initial observations 150 years ago. He showed that there were significant misperceptions of the straightness of extended lines seen in the peripheral visual field. The present study focused on the perception of extended lines (spanning 90° visual angle) that were directly fixated in the visual environment of a planetarium where there was only minimal information about the distance to the lines. Observers were asked to vary the curvature of 1 or more lines until they appeared to be straight and/or parallel, ignoring any perceived curvature in depth. When the horizon between the ground and the sky was visible, the results showed that observers' judgements of the straightness of a single line were significantly biased away from the veridical, great circle locations, and towards equal elevation settings. Similar biases can be seen in the jet trails of aircraft flying across the sky and in Rogers and Anstis's new moon illusion (Perception, 42(Abstract supplement) 18, 2013, 2016). The biasing effect of the horizon was much smaller when observers were asked to judge the straightness and parallelism of 2 or more extended lines. We interpret the results as showing that, in the absence of adequate distance information, observers tend to perceive the projected lines as lying on an approximately equidistant, hemispherical surface and that their judgements of straightness and parallelism are based on the perceived separation of the lines superimposed on that surface.
Implicit representations of space after bilateral parietal lobe damage.
Kim, M S; Robertson, L C
2001-11-15
There is substantial evidence that the primate cortex is grossly divided into two functional streams, an occipital-parietal-frontal pathway that processes "where" and an occipital-temporal-frontal pathway that processes "what" (Ungerleider and Mishkin, 1982). In humans, bilateral occipital-parietal damage results in severe spatial deficits and a neuropsychological disorder known as Balint's syndrome in which a single object can be perceived (simultanagnosia) but its location is unknown (Balint, 1995). The data reported here demonstrate that spatial information for visual features that cannot be explicitly located is represented normally below the level of spatial awareness even with large occipital-parietal lesions. They also demonstrate that parietal damage does not affect preattentive spatial coding of feature locations or complex spatial relationships between parts of a stimulus despite explicit spatial deficits and simultanagnosia.
Ergonomic approaches to designing educational materials for immersive multi-projection system
NASA Astrophysics Data System (ADS)
Shibata, Takashi; Lee, JaeLin; Inoue, Tetsuri
2014-02-01
Rapid advances in computer and display technologies have made it possible to present high-quality virtual reality (VR) environments. To use such virtual environments effectively, research is needed into how users perceive and react to them, taking human factors into account. We created a VR simulation of sea fish for science education and conducted an experiment to examine how observers perceive the size and depth of an object within their reach, while also evaluating their visual fatigue. We chose a multi-projection system for presenting the educational VR simulation because this system can provide actual-size objects and produce stereo images located close to the observer. The results of the experiment show that estimation of size and depth was relatively accurate when subjects used physical actions to assess them. Presenting images within the observer's reach is suggested to be useful for education in VR environments. The evaluation of visual fatigue shows that the level of symptoms from viewing stereo images with a large disparity in the VR environment remained low over a short viewing period.
Plewan, Thorsten; Rinkenauer, Gerhard
2016-01-01
Reaction time (RT) can be strongly influenced by a number of stimulus properties. For instance, there has been converging evidence that perceived size rather than physical (i.e., retinal) size constitutes a major determinant of RT. However, this view has recently been challenged, since within a virtual three-dimensional (3D) environment retinal size modulation failed to influence RT. In order to further investigate this issue, in the present experiments response force (RF) was recorded as a supplemental measure of response activation in simple reaction tasks. In two separate experiments participants’ task was to react as fast as possible to the occurrence of a target located close to the observer or farther away, while the offset between target locations was increased from Experiment 1 to Experiment 2. At the same time perceived target size (by varying the retinal size across depth planes) and target type (sphere vs. soccer ball) were modulated. Both experiments revealed faster and more forceful reactions when targets were presented closer to the observers. Perceived size and target type barely affected RT and RF in Experiment 1 but differentially affected both variables in Experiment 2. Thus, the present findings emphasize the usefulness of RF as a supplement to conventional RT measurement. On a behavioral level the results confirm that (at least) within virtual 3D space perceived object size strongly influences neither RT nor RF. Rather, the relative position within egocentric (body-centered) space presumably indicates an object’s behavioral relevance and consequently constitutes an important modulator of visual processing. PMID:28018273
Predicting beauty: fractal dimension and visual complexity in art.
Forsythe, A; Nadal, M; Sheehy, N; Cela-Conde, C J; Sawey, M
2011-02-01
Visual complexity has been known to be a significant predictor of preference for artistic works for some time. The first study reported here examines the extent to which perceived visual complexity in art can be successfully predicted using automated measures of complexity. Contrary to previous findings the most successful predictor of visual complexity was Gif compression. The second study examined the extent to which fractal dimension could account for judgments of perceived beauty. The fractal dimension measure accounts for more of the variance in judgments of perceived beauty in visual art than measures of visual complexity alone, particularly for abstract and natural images. Results also suggest that when colour is removed from an artistic image observers are unable to make meaningful judgments as to its beauty. ©2010 The British Psychological Society.
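Fractal dimension of the kind used in such studies is commonly estimated by box counting: cover the image with grids of shrinking cell size and regress log(occupied cells) on log(1/size). A minimal sketch on a toy point set (not the artworks used in the study):

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of 2-D points in the
    unit square: the least-squares slope of log(count) vs log(1/size)."""
    xs, ys = [], []
    for s in sizes:
        occupied = {(int(x / s), int(y / s)) for x, y in points}
        xs.append(math.log(1 / s))
        ys.append(math.log(len(occupied)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A straight diagonal line should come out with dimension close to 1;
# space-filling textures approach 2, and natural images fall in between.
line = [(i / 1000, i / 1000) for i in range(1000)]
d_line = box_count_dimension(line, [0.1, 0.05, 0.025, 0.0125])
```

For artworks, the point set would be the edge or contour pixels extracted from the image before counting.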
Tachistoscopic illumination and masking of real scenes.
Chichka, David; Philbeck, John W; Gajewski, Daniel A
2015-03-01
Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive and robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure durations down to a few milliseconds.
Perceiving the vertical distances of surfaces by means of a hand-held probe.
Chan, T C; Turvey, M T
1991-05-01
Nine experiments were conducted on the haptic capacity of people to perceive the distances of horizontal surfaces solely on the basis of mechanical stimulation resulting from contacting the surfaces with a vertically held rod. Participants touched target surfaces with rods inside a wooden cabinet and reported the perceived surface location with an indicator outside the cabinet. The target surface, rod, and the participant's hand were occluded, and the sound produced in exploration was muffled. Properties of the probe (length, mass, moment of inertia, center of mass, and shape) were manipulated, along with surface distance and the method and angle of probing. Results suggest that for the most common method of probing, namely, tapping, perceived vertical distance is specific to a particular relation among the rotational inertia of the probe, the distance of the point of contact with the surface from the probe's center of percussion, and the inclination at contact of the probe to the surface. They also suggest that the probe length and the distance probed are independently perceivable. The results were discussed in terms of information specificity versus percept-percept coupling and parallels between selective attention in haptic and visual perception.
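The "center of percussion" invoked in these results is standard rigid-body mechanics: for a probe pivoted at the grip, it is the contact point at which an impact produces no reactive force at the pivot. A sketch of the textbook relation (an illustrative uniform rod, not the study's actual probes):

```python
def center_of_percussion(inertia, mass, r_cm):
    """Distance from the pivot (grip) to the center of percussion, given
    the moment of inertia about the pivot, the mass, and the distance
    from the pivot to the center of mass."""
    return inertia / (mass * r_cm)

# Uniform rod of length L gripped at one end: I = m * L**2 / 3 about the
# grip and r_cm = L / 2, so the center of percussion lies at 2 * L / 3.
L, m = 0.9, 0.2
cop = center_of_percussion(m * L**2 / 3, m, L / 2)
```

Manipulating mass distribution while holding length fixed shifts this point, which is how such studies dissociate inertial variables from probe length.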
Perceiving environmental structure from optical motion
NASA Technical Reports Server (NTRS)
Lappin, Joseph S.
1991-01-01
One of the most important sources of optical information about environmental structure is the deforming optical pattern produced by the movements of the observer (pilot) or of environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.
What do we perceive from motion pictures? A computational account.
Cheong, Loong-Fah; Xiang, Xu
2007-06-01
Cinema viewed from a location other than a canonical viewing point (CVP) presents distortions to the viewer in both its static and its dynamic aspects. Past works have investigated mainly the static aspect of this problem and attempted to explain why viewers still seem to perceive the scene very well. The dynamic aspect of depth perception, which is known as structure from motion, and its possible distortion, have not been well investigated. We derive the dynamic depth cues perceived by the viewer and use the so-called isodistortion framework to understand its distortion. The result is that viewers seated at a reasonably central position experience a shift in the intrinsic parameters of their visual systems. Despite this shift, the key properties of the perceived depths remain largely the same, being determined in the main by the accuracy to which extrinsic motion parameters can be recovered. For a viewer seated at a noncentral position and watching the movie screen at a slant angle, the view is related to the view at the CVP by a homography, resulting in various aberrations such as noncentral projection.
Perceived reachability in hemispace.
Gabbard, Carl; Ammar, Diala; Rodrigues, Luis
2005-07-01
A common observation in studies of perceived (imagined) compared with actual movement in a reaching paradigm is the tendency to overestimate. In the studies noted, reaching tasks were presented in the general midline range. In the present study, strong right-handers were asked to judge the reachability of visual targets projected onto a table surface at midline and in the right (RVF) and left (LVF) visual fields. Midline results support those of previous studies, showing an overestimation bias. In contrast, participants tended to underestimate their reachability in the RVF and LVF. These findings are discussed from the perspective of actor 'confidence' (a cognitive state), possibly associated with visual information, perceived ability, and perceived task demands.
The extent of visual space inferred from perspective angles
Erkelens, Casper J.
2015-01-01
Retinal images are perspective projections of the visual environment. Perspective projections do not explain why we perceive perspective in 3-D space. Analysis of underlying spatial transformations shows that visual space is a perspective transformation of physical space if parallel lines in physical space vanish at finite distance in visual space. Perspective angles, i.e., the angle perceived between parallel lines in physical space, were estimated for rails of a straight railway track. Perspective angles were also estimated from pictures taken from the same point of view. Perspective angles between rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of vanishing points of visual space. All computed distances were shorter than 6 m. The shallow depth of a hypothetical space inferred from perspective angles does not match the depth of visual space, as it is perceived. Incongruity between the perceived shape of a railway line on the one hand and the experienced ratio between width and length of the line on the other hand is huge, but apparently so unobtrusive that it has remained unnoticed. The incompatibility between perspective angles and perceived distances casts doubt on evidence for a curved visual space that has been presented in the literature and was obtained from combining judgments of distances and angles with physical positions. PMID:26034567
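Under a deliberately simplified linear-convergence reading of this analysis (an assumption of mine, not the paper's exact derivation), rails of gauge w that are perceived to meet at distance d subtend a perceived perspective angle γ with tan(γ/2) = (w/2)/d, which can be inverted to estimate the vanishing distance:

```python
import math

RAIL_GAUGE = 1.435  # standard-gauge track width in metres

def vanishing_distance(perspective_angle_deg):
    """Distance at which rails perceived to converge at the given angle
    would meet, under a simple linear-convergence model (illustrative)."""
    half_angle = math.radians(perspective_angle_deg) / 2.0
    return (RAIL_GAUGE / 2.0) / math.tan(half_angle)

# A perceived angle of ~14 degrees between the rails already implies a
# vanishing point less than 6 m away, in the spirit of the abstract's
# reported shallow depths.
d = vanishing_distance(14.0)
```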
The influence of expertise and of physical complexity on visual short-term memory consolidation.
Sun, Huiming; Zimmer, Hubert D; Fu, Xiaolan
2011-04-01
We investigated whether the expertise of a perceiver and the physical complexity of a stimulus influence consolidation of visual short-term memory (VSTM) in an S1-S2 (Stimulus 1-Stimulus 2) change detection task. Consolidation is assumed to make transient perceptual representations in VSTM more durable, and it is investigated by postexposure of a mask shortly after offset of the perceived stimulus (S1; 17 to 483 ms). We presented colours, Chinese characters, pseudocharacters, and novel symbols to novices (Germans) or experts in the Chinese language (Chinese readers). Physical complexity was manipulated via the number of strokes. Unfamiliar material was remembered worse than familiar material (Experiments 1, 2, and 3). For novices, absolute VSTM performance was better for physically simple than for complex material, whereas for experts complexity did not matter when Chinese readers memorized Chinese characters (Experiment 3). Articulatory suppression did not change these effects (Experiment 2). We always observed a strong effect of stimulus onset asynchrony (SOA), but this effect was influenced neither by physical complexity nor by expertise; only the length of the interstimulus interval between S1 and the mask was relevant. This was observed even with a short SOA of 100 ms (Experiment 2) and when comparing colours and characters (Experiment 5). However, masks impaired memory if they were presented at the locations of the to-be-memorized items, but not beside them; that is, interference was location-based (Experiment 6). We explain the effect of SOA by the assumption that it takes time to stop encoding information presented at item locations after the offset of S1. The increasing resistance to interference from irrelevant material appears as consolidation of S1.
Schiefer, Matthew; Tan, Daniel; Sidek, Steven M; Tyler, Dustin J
2016-02-01
Tactile feedback is critical to grip and object manipulation. Its absence results in reliance on visual and auditory cues. Our objective was to assess the effect of sensory feedback on task performance in individuals with limb loss. Stimulation of the peripheral nerves using implanted cuff electrodes provided two subjects with sensory feedback whose intensity was proportional to forces on the thumb, index, and middle fingers of their prosthetic hand during object manipulation. Both subjects perceived the sensation on their phantom hand at locations corresponding to the locations of the forces on the prosthetic hand. A bend sensor measured prosthetic hand span. Hand span modulated the intensity of sensory feedback perceived on the thenar eminence for subject 1 and the middle finger for subject 2. We performed three functional tests with the blindfolded subjects. First, the subject tried to determine whether or not a wooden block had been placed in his prosthetic hand. Second, the subject had to locate and remove magnetic blocks from a metal table. Third, the subject performed the Southampton Hand Assessment Procedure (SHAP). We also measured the subjects' sense of embodiment, via a survey, and their self-confidence. Blindfolded performance with sensory feedback was similar to sighted performance in the wooden block and magnetic block tasks. Performance on the SHAP, a measure of hand mechanical function and control, was similar with and without sensory feedback. An embodiment survey showed an improved sense of integration of the prosthesis into self body image with sensory feedback. Sensory feedback by peripheral nerve stimulation improved object discrimination and manipulation, embodiment, and confidence. With both forms of feedback, the blindfolded subjects tended toward results obtained with visual feedback.
Perceived object stability depends on multisensory estimates of gravity.
Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H
2011-04-27
How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
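The geometric rule the abstract starts from (an object tips once the gravity-projected centre of mass passes the edge of its support) can be sketched for an idealized box; the dimensions below are hypothetical:

```python
import math

def critical_angle_deg(base_half_width, com_height):
    """Tilt angle at which the centre of mass of an idealized object
    sits directly above the pivoting edge of its base: beyond this
    angle it falls over, below it it rights itself."""
    return math.degrees(math.atan2(base_half_width, com_height))

# A tall, narrow object (half-width 2 cm, CoM 10 cm high) tips at a
# shallower angle than a squat one (half-width 5 cm, CoM 4 cm high).
tall = critical_angle_deg(0.02, 0.10)
squat = critical_angle_deg(0.05, 0.04)
```

The study's finding is that the gravity direction entering this comparison is a multisensory estimate, not the physical vertical.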
Dong, Ping; Zhong, Chen-Bo
2018-05-01
We examined the psychological impact of visual darkness on people's perceived risk of contagious-disease transmission. We posited that darkness triggers an abstract construal level and increases perceived social distance from others, rendering threats from others to seem less relevant to the self. We found that participants staying in a dimly lit room (Studies 1 and 3-5) or wearing sunglasses (Study 2) tended to estimate a lower risk of catching contagious diseases from others than did those staying in a brightly lit room or wearing clear glasses. The effect persisted in both laboratory (Studies 1-4) and real-life settings (Study 5). The effect arises because visual darkness elevates perceived social distance from the contagion (Study 3) and is attenuated among abstract (vs. concrete) thinkers (Study 4). These findings delineate a systematic, unconscious influence of visual darkness-a subtle yet pervasive situational factor-on perceived risk of contagion. Theoretical contributions and policy implications are discussed.
Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.
2016-01-01
Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021
Kawase, Saya; Hannah, Beverly; Wang, Yue
2014-09-01
This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers as judges perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible when presented in the AV compared to the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.
Perceived importance and difficulty of online activities among visually impaired persons in Nigeria.
Okonji, Patrick Emeka; Okiki, Olatokunbo Christopher; Ogwezzy, Darlington
2018-03-26
This study investigated the perceived relevance of, and difficulties in access to, day-to-day online activities among visually impaired computer users who used screen readers. The 98 participants were grouped into visually impaired adults (aged 20-59, n = 60) and visually impaired older adults (aged 60 and over, n = 38). Data were collected through structured interview questionnaires with Likert scales rating the perceived importance and difficulty of access to 11 online platforms covering various internet activities. Analyses revealed that the two groups did not differ significantly in ratings of perceived importance of four major online activities, namely sending or reading email (p = 0.5224), online banking (p = 0.2833), online shopping (p = 0.1829), and health information seeking (p = 0.1414). The top-rated priority activity in both groups was sending and reading emails. Findings also show that, apart from sending and reading emails, activities rated as important were mostly perceived as difficult to access. The implications of the study for inclusive design and for strategies and/or interventions to encourage uptake of internet use among the visually impaired population are discussed.
Heuristics of reasoning and analogy in children's visual perspective taking.
Yaniv, I; Shatz, M
1990-10-01
We propose that children's reasoning about others' visual perspectives is guided by simple heuristics based on a perceiver's line of sight and salient features of the object met by that line. In 3 experiments employing a 2-perceiver analogy task, children aged 3-6 were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight sufficed to distinguish it from alternatives. Children had greater difficulty when the task hinged on attending to configural cues. The availability of distinctive cues affixed to the objects' sides facilitated solutions for symmetrical orientations. These and several other related findings reported in the literature are traced to children's reliance on heuristics of reasoning.
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Physical Activity, Body Composition, and Perceived Quality of Life of Adults with Visual Impairments
ERIC Educational Resources Information Center
Holbrook, Elizabeth A.; Caputo, Jennifer L.; Perry, Tara L.; Fuller, Dana K.; Morgan, Don W.
2009-01-01
Relatively little is known about the health and fitness of adults with visual impairments. This article documents the physical activity levels and body-composition profiles of young and middle-aged adults with visual impairments and addresses the concomitant effects of these factors on perceived quality of life. (Contains 2 tables.)
ERIC Educational Resources Information Center
Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora
2013-01-01
Introduction: The study reported here explored the relationship between the self-perceived computer competence and employment outcomes of transition-aged youths with visual impairments. Methods: Data on 200 in-school youths and 190 out-of-school youths with a primary disability of visual impairment were retrieved from the database of the first…
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
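Ensemble coding as described here reduces a set of stimuli to summary statistics. A minimal sketch, assuming each group member is expressed as a single per-person rating (the ratings below are invented for illustration), might extract a group's mean and variance like this:

```python
def ensemble_summary(values):
    """Summary statistics an ensemble code is assumed to extract:
    the mean (central tendency) and the variance (e.g., perceived
    hierarchy as dominance variance across a group)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return mean, variance

# Two hypothetical six-person groups with the same average dominance
# but very different hierarchy (variance):
flat_group = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]   # no hierarchy
steep_group = [0.9, 0.8, 0.6, 0.4, 0.2, 0.1]  # strong hierarchy
```

On this toy reading, "diversity" and "hierarchy" judgments correspond to variance estimates over race, gender, or dominance rather than to any single individual.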
The paradoxical moon illusions.
Gilinsky, A S
1980-02-01
An adaptation theory of visual space is developed and applied to data from a variety of studies of visual space perception. By distinguishing between the perceived distance of an object and that of the background or sky, the theory resolves the paradox of the moon illusions and relates both the perceived size and the perceived distance of the moon to the absolute level of spatial adaptation. The theory assumes that visual space expands or contracts in adjustment to changes in the sensory indicators of depth, and provides a measure, A, of this adaptation level. Changes in A have two effects: one on perceived size, one on perceived distance. Since A varies systematically as a function of angle of regard, availability of cues, and the total space-value, A is a measure of the moon illusions and a practical index of individual differences among pilots and astronauts in the perception of the size and distance of objects on the ground and in the air.
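Gilinsky's earlier formulation of this framework (as I understand it; the abstract itself quotes no equations) compresses perceived distance toward the adaptation level A, with perceived distance d' = AD/(A + D) for physical distance D:

```python
def perceived_distance(physical_d, adaptation_a):
    """Gilinsky-style compression: perceived distance approaches the
    adaptation level A as physical distance grows without bound.
    Illustrative only; not quoted from the present paper."""
    return adaptation_a * physical_d / (adaptation_a + physical_d)

# With a hypothetical A = 30 m, an object 30 m away is perceived at
# half its physical distance, and no object is ever perceived beyond A.
near = perceived_distance(30.0, 30.0)
far = perceived_distance(10_000.0, 30.0)
```

The saturation at A is what lets a single adaptation parameter carry both size and distance effects in the theory sketched above.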
Rolfs, Martin; Carrasco, Marisa
2012-01-01
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086
Remote vs. head-mounted eye-tracking: a comparison using radiologists reading mammograms
NASA Astrophysics Data System (ADS)
Mello-Thoms, Claudia; Gur, David
2007-03-01
Eye position monitoring has been used for decades in radiology to determine how radiologists interpret medical images. Using these devices, several discoveries about the perception/decision-making process have been made, such as the importance of comparisons of perceived abnormalities with selected areas of the background, the likelihood that a true lesion will attract visual attention early in the reading process, and the finding that most misses attract prolonged visual dwell, often comparable to dwell at the location of reported lesions. However, eye position tracking is a cumbersome process that often requires the observer to wear a helmet containing the eye tracker itself and a magnetic head tracker, which allows for the computation of head position. Observers tend to complain of fatigue after wearing the gear for a prolonged time. Recently, with the advances made in remote eye-tracking, the use of head-mounted systems seemed destined to become a thing of the past. In this study we evaluated a remote eye-tracking system and compared it to a head-mounted system as radiologists read a case set of one-view mammograms on a high-resolution display. We compared visual search parameters between the two systems, such as time to first hit the location of the lesion, amount of dwell time at the location of the lesion, and total time analyzing the image. We also evaluated the observers' impressions of both systems and their perceptions of each system's restrictions.
Guiding the mind's eye: improving communication and vision by external control of the scanpath
NASA Astrophysics Data System (ADS)
Barth, Erhardt; Dorr, Michael; Böhme, Martin; Gegenfurtner, Karl; Martinetz, Thomas
2006-02-01
Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements. Inspired by his view, we have studied the implications of the scanpath for visual communication and came up with the idea to not only sense and analyse eye movements, but also guide them by using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability for one of these candidates to be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed very briefly such that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation minimising the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and augmented vision and learning (guide a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and the close link between vision research and engineering.
What explains health in persons with visual impairment?
2014-01-01
Background Visual impairment is associated with important limitations in functioning. The International Classification of Functioning, Disability and Health (ICF) adopted by the World Health Organisation (WHO) relies on a globally accepted framework for classifying problems in functioning and the influence of contextual factors. Its comprehensive perspective, including biological, individual and social aspects of health, enables the ICF to describe the whole health experience of persons with visual impairment. The objectives of this study are (1) to analyze whether the ICF can be used to comprehensively describe the problems in functioning of persons with visual impairment and the environmental factors that influence their lives and (2) to select the ICF categories that best capture self-perceived health of persons with visual impairment. Methods Data from 105 persons with visual impairment were collected, including socio-demographic data, vision-related data, the Extended ICF Checklist and the visual analogue scale of the EuroQoL-5D, to assess self-perceived health. Descriptive statistics and a Group Lasso regression were performed. The main outcome measures were functioning defined as impairments in Body functions and Body structures, limitations in Activities and restrictions in Participation, influencing Environmental factors and self-perceived health. Results In total, 120 ICF categories covering a broad range of Body functions, Body structures, aspects of Activities and Participation and Environmental factors were identified. Thirteen ICF categories that best capture self-perceived health were selected based on the Group Lasso regression. While Activities-and-Participation categories were selected most frequently, the greatest impact on self-perceived health was found in Body-functions categories. 
Conclusions The ICF can be used as a framework to comprehensively describe the problems of persons with visual impairment and the Environmental factors which influence their lives. There are plenty of ICF categories, Environmental-factors categories in particular, which are relevant to persons with visual impairment but have hardly ever been taken into consideration in the literature and in visual impairment-specific patient-reported outcome measures. PMID:24886326
ERIC Educational Resources Information Center
Lee, Soon Min; Oh, Yunjin
2017-01-01
Introduction: This study examined a mediator role of perceived stress on the prediction of the effects of academic stress on depressive symptoms among e-learning students with visual impairments. Methods: A convenience sample for this study was collected for three weeks from November to December in 2012 among students with visual impairments…
Figure-ground organization and the emergence of proto-objects in the visual cortex.
von der Heydt, Rüdiger
2015-01-01
A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.
Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin
2016-01-01
The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far they are apart. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and Muller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions. Participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms were responsible for execution of discrete and continuous Fitts' tapping. Although discrete tapping relies on allocentric information (object-centered) to plan for action, continuous tapping relies on egocentric information (self-centered) to control for action. The planning-control model for rapid aiming movements is supported.
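The index of difficulty (ID) invoked above is, in Fitts' classic formulation, ID = log2(2A/W). A minimal sketch follows; note the study may have used a variant such as the Shannon form log2(A/W + 1):

```python
import math

def index_of_difficulty(amplitude, width):
    """Fitts' index of difficulty, ID = log2(2A / W): A is the distance
    between targets and W is target width. Larger separations or smaller
    targets raise the ID and slow accurate tapping."""
    return math.log2(2.0 * amplitude / width)

# Farther-apart or smaller targets are harder to tap (higher ID)
assert index_of_difficulty(200, 20) > index_of_difficulty(100, 20)
assert index_of_difficulty(100, 10) > index_of_difficulty(100, 20)
```

Illusions that change apparent target size or separation effectively change the perceived A and W, and hence the perceived ID, without changing the physical layout.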
Barli, Onder; Bilgili, Bilsen; Dane, Senol
2006-10-01
The associations of sex and eyedness of consumers in a market and the market's lighting and wall color with price attraction and perceived quality of goods and the inside visual appeal were studied using an inventory after shopping by 440 men and 478 women, 20 to 60 years old (M = 29.3, SD = 10.2). Two lights (soft and bright), 4 colors (blue, yellow, green, and red), and neutral light (white) were used. Women rated the prices of goods as more attractive than men did. In the total sample, left-eye preferents rated visual appeal higher than right-eye preferents did. Bright light was associated with higher visual appeal than soft light. Green was associated with the highest inside visual appeal and perceived quality of goods, which may be due to its intermediate wavelength.
Figure-ground activity in V1 and guidance of saccadic eye movements.
Supèr, Hans
2006-01-01
Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.
Experimental test of visuomotor updating models that explain perisaccadic mislocalization.
Van Wetter, Sigrid M C I; Van Opstal, A John
2008-10-23
Localization of a brief visual target is inaccurate when presented around saccade onset. Perisaccadic mislocalization is maximal in the saccade direction and varies systematically with the target-saccade onset disparity. It has been hypothesized that this effect is either due to a sluggish representation of eye position, to low-pass filtering of the visual event, to saccade-induced compression of visual space, or to a combination of these effects. Despite their differences, these schemes all predict that the pattern of localization errors varies systematically with the saccade amplitude and kinematics. We tested these predictions for the double-step paradigm by analyzing the errors for saccades of widely varying amplitudes. Our data show that the measured error patterns are only mildly influenced by the primary-saccade amplitude over a large range of saccade properties. An alternative possibility, better accounting for the data, assumes that around saccade onset perceived target location undergoes a uniform shift in the saccade direction that varies with amplitude only for small saccades. The strength of this visual effect saturates at about 10 deg and also depends on target duration. Hence, we propose that perisaccadic mislocalization results from errors in visual-spatial perception rather than from sluggish oculomotor feedback.
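The saturating uniform-shift account proposed above can be written as a toy model; the ~10 deg saturation point comes from the abstract, while the gain parameter is a hypothetical placeholder for illustration:

```python
def perisaccadic_shift(saccade_amplitude_deg, saturation_deg=10.0, gain=0.5):
    """Toy version of the uniform-shift account: around saccade onset the
    perceived target location shifts in the saccade direction by an amount
    that grows with saccade amplitude only for small saccades and saturates
    at about 10 deg. The saturation point follows the abstract; `gain` is a
    hypothetical scaling parameter chosen for illustration."""
    return gain * min(saccade_amplitude_deg, saturation_deg)

# The shift grows with amplitude for small saccades...
assert perisaccadic_shift(4.0) < perisaccadic_shift(8.0)
# ...but is the same for all large saccades, unlike compression-based models
assert perisaccadic_shift(15.0) == perisaccadic_shift(30.0)
```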
Visual cues and perceived reachability.
Gabbard, Carl; Ammar, Diala
2005-12-01
A rather consistent finding in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate at midline. Explanations of such behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom, also known as the whole-body explanation. The present study examined the role of visual information in the form of binocular and monocular cues in perceived reachability. Right-handed participants judged the reachability of visual targets at midline with both eyes open, dominant eye occluded, and the non-dominant eye covered. Results indicated that participants were relatively accurate with condition responses not being significantly different in regard to total error. Analysis of the direction of error (mean bias) revealed effective accuracy across conditions with only a marginal distinction between monocular and binocular conditions. Therefore, within the task conditions of this experiment, it appears that binocular and monocular cues provide sufficient visual information for effective judgments of perceived reach at midline.
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
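The "statistically optimal integration" benchmark against which the observed weighting is compared is standard inverse-variance cue combination. A minimal sketch with made-up duration estimates and noise levels:

```python
def optimal_weights(sigmas):
    """Reliability (inverse-variance) weights prescribed by statistically
    optimal cue integration: w_i is proportional to 1 / sigma_i**2."""
    inv_vars = [1.0 / s**2 for s in sigmas]
    total = sum(inv_vars)
    return [iv / total for iv in inv_vars]

def integrated_estimate(estimates, sigmas):
    """Optimal (maximum-likelihood) combination of single-cue duration
    estimates; the abstract reports that observers' actual weighting
    deviates from this prediction."""
    w = optimal_weights(sigmas)
    return sum(wi * ei for wi, ei in zip(w, estimates))

# Two single-stimulus duration estimates (ms): one reliable, one noisier
est = integrated_estimate([500.0, 600.0], sigmas=[50.0, 100.0])
# The combined estimate lies closer to the more reliable cue (520 ms)
assert abs(est - 520.0) < 1e-6
```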
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion
Harvie, Daniel S.; Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer
2017-01-01
Background: Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can’t be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method: In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200%—the Motor Offset Visual Illusion (MoOVi)—thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results: Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion: Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537
The Effect of Temporal Perception on Weight Perception
Kambara, Hiroyuki; Shin, Duk; Kawase, Toshihiro; Yoshimura, Natsue; Akahane, Katsuhito; Sato, Makoto; Koike, Yasuharu
2013-01-01
A successful catch of a falling ball requires an accurate estimation of the timing for when the ball hits the hand. In a previous experiment in which participants performed a ball-catching task in a virtual reality environment, we accidentally found that the weight of a falling ball was perceived differently when the timing of ball load force to the hand was shifted from the timing expected from visual information. Although it is well known that spatial information about an object, such as size, can easily deceive our perception of its heaviness, the relationship between temporal information and perceived heaviness is still not clear. In this study, we investigated the effect of temporal factors on weight perception. We conducted ball-catching experiments in a virtual environment where the timing of load force exertion was shifted away from the visual contact timing (i.e., the time when the ball hit the hand in the display). We found that the ball was perceived as heavier when force was applied before visual contact and lighter when force was applied after visual contact. We also conducted additional experiments in which participants were conditioned to one of two constant time offsets prior to testing weight perception. After performing ball-catching trials with 60 ms advanced or delayed load force exertion, participants’ subjective judgment of the simultaneity of visual contact and force exertion changed, reflecting a shift in perception of time offset. In addition, the timing of catching motion initiation relative to visual contact changed, reflecting a shift in estimation of force timing. We also found that participants began to perceive the ball as lighter after conditioning to the 60 ms advanced offset and heavier after the 60 ms delayed offset. These results suggest that perceived heaviness depends not on the actual time offset between force exertion and visual contact but on the subjectively perceived time offset between them and/or estimation error in force timing. PMID:23450805
Balas, Benjamin
2016-11-01
Peripheral visual perception is characterized by reduced information about appearance due to constraints on how image structure is represented. Visual crowding is a consequence of excessive integration in the visual periphery. Basic phenomenology of visual crowding and other tasks have been successfully accounted for by a summary-statistic model of pooling, suggesting that texture-like processing is useful for how information is reduced in peripheral vision. I attempt to extend the scope of this model by examining a property of peripheral vision: reduced perceived numerosity in the periphery. I demonstrate that a summary-statistic model of peripheral appearance accounts for reduced numerosity in peripherally viewed arrays of randomly placed dots, but does not account for observed effects of dot clustering within such arrays. The model thus offers a limited account of how numerosity is perceived in the visual periphery. I also demonstrate that the model predicts that numerosity estimation is sensitive to element shape, which represents a novel prediction regarding the phenomenology of peripheral numerosity perception. Finally, I discuss ways to extend the model to a broader range of behavior and the potential for using the model to make further predictions about how number is perceived in untested scenarios in peripheral vision.
Task relevance predicts gaze in videos of real moving scenes.
Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C
2011-09-01
Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.
Rhesus Monkeys Behave As If They Perceive the Duncker Illusion
Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.
2008-01-01
The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233
Lin, Hsien-Cheng; Chiu, Yu-Hsien; Chen, Yenming J; Wuang, Yee-Pay; Chen, Chiu-Ping; Wang, Chih-Chung; Huang, Chien-Ling; Wu, Tang-Meng; Ho, Wen-Hsien
2017-11-01
This study developed an interactive computer game-based visual perception learning system for special education children with developmental delay. To investigate whether perceived interactivity affects continued use of the system, this study developed a theoretical model of the process in which learners decide whether to continue using an interactive computer game-based visual perception learning system. The technology acceptance model, which considers perceived ease of use, perceived usefulness, and perceived playfulness, was extended by integrating perceived interaction (i.e., learner-instructor interaction and learner-system interaction) and then analyzing the effects of these perceptions on satisfaction and continued use. Data were collected from 150 participants (rehabilitation therapists, medical paraprofessionals, and parents of children with developmental delay) recruited from a single medical center in Taiwan. Structural equation modeling and partial-least-squares techniques were used to evaluate relationships within the model. The modeling results indicated that both perceived ease of use and perceived usefulness were positively associated with both learner-instructor interaction and learner-system interaction. However, perceived playfulness only had a positive association with learner-system interaction and not with learner-instructor interaction. Moreover, satisfaction was positively affected by perceived ease of use, perceived usefulness, and perceived playfulness. Thus, satisfaction positively affects continued use of the system. The data obtained by this study can be applied by researchers, designers of computer game-based learning systems, special education workers, and medical professionals. Copyright © 2017 Elsevier B.V. All rights reserved.
Extended Wearing Trial of Trifield Lens Device for “Tunnel Vision”
Woods, Russell L.; Giorgi, Robert G.; Berson, Eliot L.; Peli, Eli
2009-01-01
Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5 to 22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6 to 60, weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, 9 chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those 9 patients, at long-term follow-up (35 to 78 weeks), 3 reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9 to 38, degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed. PMID:20444130
Perceiving the present and a systematization of illusions.
Changizi, Mark A; Hsieh, Andrew; Nijhawan, Romi; Kanai, Ryota; Shimojo, Shinsuke
2008-04-05
Over the history of the study of visual perception there has been great success at discovering countless visual illusions. There has been less success in organizing the overwhelming variety of illusions into empirical generalizations (much less explaining them all via a unifying theory). Here, this article shows that it is possible to systematically organize more than 50 kinds of illusion into a 7 × 4 matrix of 28 classes. In particular, this article demonstrates that (1) smaller sizes, (2) slower speeds, (3) greater luminance contrast, (4) farther distance, (5) lower eccentricity, (6) greater proximity to the vanishing point, and (7) greater proximity to the focus of expansion all tend to have similar perceptual effects, namely, to (A) increase perceived size, (B) increase perceived speed, (C) decrease perceived luminance contrast, and (D) decrease perceived distance. The detection of these empirical regularities was motivated by a hypothesis, called "perceiving the present," that the visual system possesses mechanisms for compensating neural delay during forward motion. This article shows how this hypothesis predicts these empirical regularities. © 2008 Cognitive Science Society, Inc.
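The 7 × 4 organization described above can be written out directly. A sketch enumerating the 28 (factor, perceptual-effect) classes named in the abstract:

```python
from itertools import product

# The seven stimulus/viewing factors (1-7) and four perceptual dimensions
# (A-D) from the abstract; each (factor, dimension) pair is one of the
# 28 illusion classes in the 7 x 4 matrix.
factors = [
    "smaller size", "slower speed", "greater luminance contrast",
    "farther distance", "lower eccentricity",
    "greater proximity to the vanishing point",
    "greater proximity to the focus of expansion",
]
effects = [
    "increase perceived size", "increase perceived speed",
    "decrease perceived luminance contrast", "decrease perceived distance",
]
illusion_classes = list(product(factors, effects))
assert len(illusion_classes) == 28  # the 7 x 4 matrix of classes
```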
Reeder, B; Chung, J; Le, T; Thompson, H; Demiris, G
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records". Our objectives were to: 1) characterize older adult participants' perceived usefulness of in-home sensor data and 2) develop novel visual displays for sensor data from Ambient Assisted Living environments that can become part of electronic health records. Semi-structured interviews were conducted with community-dwelling older adult participants during three- and six-month visits. We engaged participants in two design iterations by soliciting feedback about display types and visual displays of simulated data related to a fall scenario. Interview transcripts were analyzed to identify themes related to perceived usefulness of sensor data. Thematic analysis identified three themes: perceived usefulness of sensor data for managing health; factors that affect perceived usefulness of sensor data; and perceived usefulness of visual displays. Visual displays were cited as potentially useful for family members and health care providers. Three novel visual displays were created based on interview results, design guidelines derived from prior AAL research, and principles of graphic design theory. Participants identified potential uses of personal activity data for monitoring health status and capturing early signs of illness. One area for future research is to determine how visual displays of AAL data might be utilized to connect family members and health care providers through shared understanding of activity levels versus a more simplified view of self-management. Connecting informal and formal caregiving networks may facilitate better communication between older adults, family members and health care providers for shared decision-making.
Mackrous, I; Simoneau, M
2011-11-10
Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate to predict the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants unaware of the role of the visual cues (visual cues group) learned to predict the location of the target, and spatial error decreased from 16.2° to 2.0°, reflecting a learning rate of 34.08 trials (determined from fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e. 2.66 trials) but a similar final spatial error of 2.9°. For the vestibular-only group, similar accuracy was achieved (a final spatial error of 2.3°), but their learning rate was much slower (i.e. 43.29 trials). Transferring to the Post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°), but it did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
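The falling-exponential description of the learning data can be sketched with the values reported for the group unaware of the visual cues; the exact functional form and parameterization are assumptions consistent with the abstract:

```python
import math

def learning_curve(trial, e0=16.2, e_final=2.0, tau=34.08):
    """Falling-exponential model of spatial error (degrees) across practice
    trials: error decays from an initial level e0 toward an asymptote
    e_final with rate constant tau (the 'learning rate' in trials).
    Parameter values are taken from the abstract's visual cues group;
    the additive-asymptote form is an illustrative assumption."""
    return e_final + (e0 - e_final) * math.exp(-trial / tau)

# Error starts near 16.2 degrees and approaches 2.0 degrees with practice
assert abs(learning_curve(0) - 16.2) < 1e-9
assert learning_curve(200) < 2.1
```

A smaller tau (e.g. the explicit group's 2.66 trials) makes the same curve collapse to its asymptote within a handful of trials.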
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
Unconscious analyses of visual scenes based on feature conjunctions.
Tachibana, Ryosuke; Noguchi, Yasuki
2015-06-01
To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.
Lazard, Allison; Mackert, Michael
2014-10-01
This paper highlights the influential role of design complexity for users' first impressions of health websites. An experimental design was utilized to investigate whether a website's level of design complexity impacts user evaluations. An online questionnaire measured the hypothesized impact of design complexity on predictors of message effectiveness. Findings reveal that increased design complexity was positively associated with higher levels of perceived design esthetics, attitude toward the website, perceived message comprehensibility, perceived ease of use, perceived usefulness, perceived message quality, perceived informativeness, and perceived visual informativeness. This research gives further evidence that design complexity should be considered an influential variable for health communicators to effectively reach their audiences, as it embodies the critical first step for message evaluation via electronic platforms. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Selecting and perceiving multiple visual objects
Xu, Yaoda; Chun, Marvin M.
2010-01-01
To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encodes their details (object identification). We describe the involvement of the inferior intra-parietal sulcus (IPS) in object individuation and the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition such as working memory. PMID:19269882
Perceived Competence of Children with Visual Impairments
ERIC Educational Resources Information Center
Shapiro, Deborah R.; Moffett, Aaron; Lieberman, Lauren; Dummer, Gail M.
2005-01-01
This study examined the perceptions of competence of 43 children with visual impairments who were attending a summer sports camp. It found there were meaningful differences in the perceived competence of the girls, but not the boys, after they attended the camp, and no differences in the perceptions of competence with age.
Flexible Visual Processing of Spatial Relationships
ERIC Educational Resources Information Center
Franconeri, Steven L.; Scimeca, Jason M.; Roth, Jessica C.; Helseth, Sarah A.; Kahn, Lauren E.
2012-01-01
Visual processing breaks the world into parts and objects, allowing us not only to examine the pieces individually, but also to perceive the relationships among them. There is work exploring how we perceive spatial relationships within structures with existing representations, such as faces, common objects, or prototypical scenes. But strikingly,…
Structural salience and the nonaccidentality of a Gestalt.
Strother, Lars; Kubovy, Michael
2012-08-01
We perceive structure through a process of perceptual organization. Here we report a new perceptual organization phenomenon: the facilitation of visual grouping by global curvature. Observers viewed patterns that they perceived as organized into collections of curves. The patterns were perceptually ambiguous such that the perceived orientation of the patterns varied from trial to trial. When patterns were sufficiently dense and proximity was equated for the predominant perceptual alternatives, observers tended to perceive the organization with the greatest curvature. This effect is tantamount to visual grouping by maximal curvature and thus demonstrates an unprecedented effect of global structure on perceptual organization. We account for this result with a model that predicts the perceived organization of a pattern as a function of its nonaccidentality, which we define as the probability that it could have occurred by chance. Our findings demonstrate a novel relationship between the geometry of a pattern and the visual salience of global structure. (c) 2012 APA, all rights reserved.
Baldwin, Carryl L; Eisert, Jesse L; Garcia, Andre; Lewis, Bridget; Pratt, Stephanie M; Gonzalez, Christian
2012-01-01
Through a series of investigations involving different levels of contextual fidelity, we developed scales of perceived urgency for several dimensions of the auditory, visual, and tactile modalities. Psychophysical ratings of perceived urgency, annoyance, and acceptability, as well as behavioral responses to signals in each modality, were obtained and analyzed using Stevens' power law to allow comparison across modalities. The obtained results and their implications for use as in-vehicle alerts and warnings are discussed.
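Stevens' power law, cited above, relates perceived magnitude ψ to stimulus intensity I as ψ = k·I^a; the exponent a is conventionally estimated by linear regression on log-log transformed data, since log ψ = log k + a·log I. A minimal sketch with hypothetical ratings (the function name and data are illustrative assumptions, not the study's):

```python
import math

def stevens_exponent(intensities, ratings):
    """Estimate the power-law exponent a in psi = k * I**a via
    ordinary least squares on the log-log transformed data."""
    xs = [math.log(i) for i in intensities]
    ys = [math.log(r) for r in ratings]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # OLS slope of log(rating) on log(intensity) is the exponent a
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical urgency ratings following psi = 2 * I**0.8 exactly
levels = [1, 2, 4, 8, 16]
ratings = [2 * i ** 0.8 for i in levels]
a = stevens_exponent(levels, ratings)
```

An exponent near 1 would indicate that rated urgency grows roughly linearly with signal intensity; exponents below or above 1 indicate compressive or expansive growth, which is what makes the law useful for comparing modalities.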
Competition-strength-dependent ground suppression in figure-ground perception.
Salvagio, Elizabeth; Cacciamani, Laura; Peterson, Mary A
2012-07-01
Figure-ground segregation is modeled as inhibitory competition between objects that might be perceived on opposite sides of borders. The winner is the figure; the loser is suppressed, and its location is perceived as shapeless ground. Evidence of ground suppression would support inhibitory competition models and would contribute to explaining why grounds are shapeless near borders shared with figures, yet such evidence is scarce. We manipulated whether competition from potential objects on the ground side of figures was high (i.e., portions of familiar objects were potentially present there) or low (novel objects were potentially present). We predicted that greater competition would produce more ground suppression. The results of two experiments in which suppression was assessed via judgments of the orientation of target bars confirmed this prediction; a third experiment showed that ground suppression is short-lived. Our findings support inhibitory competition models of figure assignment, in particular, and models of visual perception entailing feedback, in general.
Reward alters the perception of time.
Failing, Michel; Theeuwes, Jan
2016-03-01
Recent findings indicate that monetary rewards have a powerful effect on cognitive performance. In order to maximize overall gain, the prospect of earning reward biases visual attention to specific locations or stimulus features improving perceptual sensitivity and processing. The question we addressed in this study is whether the prospect of reward also affects the subjective perception of time. Here, participants performed a prospective timing task using temporal oddballs. The results show that temporal oddballs, displayed for varying durations, presented in a sequence of standard stimuli were perceived to last longer when they signaled a relatively high reward compared to when they signaled no or low reward. When instead of the oddball the standards signaled reward, the perception of the temporal oddball remained unaffected. We argue that by signaling reward, a stimulus becomes subjectively more salient thereby modulating its attentional deployment and distorting how it is perceived in time. Copyright © 2015 Elsevier B.V. All rights reserved.
Influence of Visual Prism Adaptation on Auditory Space Representation.
Pochopien, Klaudia; Fahle, Manfred
2017-01-01
Prisms shifting the visual input sideways produce a mismatch between the visual and felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.
Glowinski, Donald; Riolfo, Arianna; Shirole, Kanika; Torres-Eliard, Kim; Chiorri, Carlo; Grandjean, Didier
2014-01-01
Visual information is imperative when developing a concrete and context-sensitive understanding of how music performance is perceived. Recent studies highlight natural, automatic, and nonconscious dependence on visual cues that ultimately refer to body expressions observed in the musician. The current study investigated how the social context of a performing musician (e.g., playing alone or within an ensemble) and the musical expertise of the perceivers influence the strategies used to understand and decode the visual features of music performance. Results revealed that both perceiver groups, nonmusicians and musicians, have a higher sensitivity towards gaze information; therefore, an impoverished stimulus such as a point-light display is insufficient for understanding the social context in which the musician is performing. Implications of these findings are discussed.
Smith, Nicholas A.; Folland, Nicholas A.; Martinez, Diana M.; Trainor, Laurel J.
2017-01-01
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
Does my step look big in this? A visual illusion leads to safer stepping behaviour.
Elliott, David B; Vale, Anna; Whitaker, David; Buckley, John G
2009-01-01
Tripping is a common factor in falls, and a typical safety strategy to avoid tripping on steps or stairs is to increase foot clearance over the step edge. In the present study we asked whether the perceived height of a step could be increased using a visual illusion and whether this would lead to the adoption of a safer stepping strategy, in terms of greater foot clearance over the step edge. The study also addressed the controversial question of whether motor actions are dissociated from visual perception. Twenty-one young, healthy subjects perceived the step to be higher in a configuration of the horizontal-vertical illusion compared to a reverse configuration (p = 0.01). During a simple stepping task, maximum toe elevation changed by an amount corresponding to the size of the visual illusion (p < 0.001). Linear regression analyses showed highly significant associations between perceived step height and maximum toe elevation for all conditions. The perceived height of a step can be manipulated using a simple visual illusion, leading to the adoption of a safer stepping strategy in terms of greater foot clearance over a step edge. In addition, the strong link found between perception of a visual illusion and visuomotor action provides additional support to the view that the original, controversial proposal by Goodale and Milner (1992) of two separate and distinct visual streams for perception and visuomotor action should be re-evaluated.
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Heuristics of Reasoning and Analogy in Children's Visual Perspective Taking.
ERIC Educational Resources Information Center
Yaniv, Ilan; Shatz, Marilyn
1990-01-01
In three experiments, children of three through six years of age were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight was salient. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed to objects facilitated…
Visual Literacy: Implications for the Production of Children's Television Programs.
ERIC Educational Resources Information Center
Amey, L. J.
Visual literacy, the integration of seeing with other cognitive processes, is an essential tool of learning. To explain the relationship between the perceiver and the perceived, three types of theories can be brought to bear: introverted; extroverted; and transactional. Franklin Fearing, George Herbert Mead, Martin Buber, and other theorists have…
Children perceive speech onsets by ear and eye*
Jerger, Susan; Damian, Markus F.; Tye-Murray, Nancy; Abdi, Hervé
2016-01-01
Adults use vision to perceive low-fidelity speech; yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see dynamic face articulate consonant/rhyme b/ag; hear non-intact onset/rhyme: −b/ag) vs. auditorily (see still face; hear exactly same auditory input). Audiovisual speech produced greater priming from four to fourteen years, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children – like adults – perceive speech onsets multimodally. Findings are critical for incorporating visual speech into developmental theories of speech perception. PMID:26752548
Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M
2017-02-19
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in the suprasensory area are important for an individual's tendency to determine antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Baek, Eui Seon; Hwang, Soonshin; Choi, Yoon Jeong; Roh, Mi Ryung; Nguyen, Tung; Kim, Kyung-Ho; Chung, Chooryung J
2018-07-01
The objectives of this study were to evaluate the quantitative and perceived visual changes of the nasolabial fold (NLF) after maximum retraction in adults and to determine its contributing factors. A total of 39 adult women's cone-beam computed tomography (CBCT) images were collected retrospectively and divided into the retraction group (age 26.9 ± 8.80), which underwent maximum retraction following four-premolar extraction, and the control group (age 24.6 ± 5.36), with minor changes of the incisors. Three-dimensional morphologic changes of hard and soft tissue, including the NLF, were measured on pre- and posttreatment CBCT. In addition, perceived visual change of the NLF was monitored using the modified Global Aesthetic Improvement Scale. The influence of age, initial severity of the NLF, and initial soft tissue thickness was evaluated. Anterior retraction induced significant changes of the facial soft tissue, including the lips, perioral region, and the NLF, when compared with the controls (P < .01). Perceived visual change of the NLF was noted only in women younger than age 30 (P < .05), with an odds ratio (95% confidence interval) of 2.44 (1.3461-4.4226), indicating a greater possibility for improvement of NLF esthetics in young women of the retraction group when compared with the controls. Orthodontic retraction induced quantitative and perceived visual changes of the NLF. For adult women younger than age 30, the appearance of the NLF improved after maximum retraction despite the greater posterior change of the NLF.
Processing spatial layout by perception and sensorimotor interaction.
Bridgeman, Bruce; Hoover, Merrit
2008-06-01
Everyone has the feeling that perception is usually accurate: we apprehend the layout of the world without significant error, and therefore we can interact with it effectively. Several lines of experimentation, however, show that perceived layout is seldom accurate enough to account for the success of visually guided behaviour. A visual world that has more texture on one side, for example, induces a shift of the body's straight ahead to that side and a mislocalization of a small target to the opposite side. Motor interaction with the target remains accurate, however, as measured by a jab with the finger. Slopes of hills are overestimated, even while matching the slopes of the same hills with the forearm is more accurate. The discrepancy shrinks as the estimated range is reduced, until the two estimates are hardly discrepant for a segment of a slope within arm's reach. From an evolutionary standpoint, the function of perception is not to provide an accurate physical layout of the world, but to inform the planning of future behaviour. Illusions (inaccuracies in perception) are perceived as such only when they can be verified by objective means, such as measuring the slope of a hill, the range of a landmark, or the location of a target. Normally such illusions are not checked and are accepted as reality without contradiction.
Suggested Interactivity: Seeking Perceived Affordances for Information Visualization.
Boy, Jeremy; Eveillard, Louis; Detienne, Françoise; Fekete, Jean-Daniel
2016-01-01
In this article, we investigate methods for suggesting the interactivity of online visualizations embedded with text. We first assess the need for such methods by conducting three initial experiments on Amazon's Mechanical Turk. We then present a design space for Suggested Interactivity (SI; i.e., visual cues used as perceived affordances), based on a survey of 382 HTML5 and visualization websites. Finally, we assess the effectiveness of three SI cues we designed for suggesting the interactivity of bar charts embedded with text. Our results show that only one cue (SI3) was successful in inciting participants to interact with the visualizations, and we hypothesize that this is because this particular cue provided feedforward.
Micro-Valences: Perceiving Affective Valence in Everyday Objects
Lebrecht, Sophie; Bar, Moshe; Barrett, Lisa Feldman; Tarr, Michael J.
2012-01-01
Perceiving the affective valence of objects influences how we think about and react to the world around us. Conversely, the speed and quality with which we visually recognize objects in a visual scene can vary dramatically depending on that scene's affective content. Although typical visual scenes contain mostly "everyday" objects, affect perception in visual objects has been studied using somewhat atypical stimuli with strong affective valences (e.g., guns or roses). Here we explore whether affective valence must be strong or overt to exert an effect on our visual perception. We conclude that everyday objects carry subtle affective valences, "micro-valences", which are intrinsic to their perceptual representation. PMID:22529828
Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki
2008-01-01
The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms to the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. The perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (i.e., the heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were also caused by concurrent actual self-motion in a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field visually induced (i.e., optic-flow) self-motion can affect the perceived temporal order of successive external events across various modalities.
Characterizing visual asymmetries in contrast perception using shaded stimuli.
Chacón, José; Castellanos, Miguel Ángel; Serrano-Pedraza, Ignacio
2015-01-01
Previous research has shown a visual asymmetry in shaded stimuli whereby perceived contrast depends on the polarity of their dark and light areas (Chacón, 2004). In particular, circles filled with a top-dark luminance ramp were perceived with higher contrast than top-light ones, although both types of stimuli had the same physical contrast. Here, using shaded stimuli, we conducted four experiments to determine whether perceived contrast depends on: (a) the contrast level, (b) the type of shading (continuous vs. discrete) and its degree of perceived three-dimensionality, (c) the orientation of the shading, and (d) the sign of the perceived contrast alterations. In all experiments the observers' task was to equate the perceived contrast of two sets of elements (usually shaded with opposite luminance polarity) in order to determine the point of subjective equality. Results showed that (a) there is a strong difference in perceived contrast between circles filled with top-dark and top-light luminance ramps that is similar across contrast levels; (b) we also found asymmetries in contrast perception with different shaded stimuli, and this asymmetry was related not to perceived three-dimensionality but to the type of shading, being greater for continuous-shading stimuli; (c) differences in perceived contrast varied with stimulus orientation, showing the maximum difference on the vertical axis with a left bias consistent with the bias found in previous studies that used visual-search tasks; and (d) the asymmetries are consistent with an attenuation in perceived contrast that is selective for top-light vertically shaded stimuli.
Perceived distance depends on the orientation of both the body and the visual environment.
Harris, Laurence R; Mander, Charles
2014-10-15
Models of depth perception typically omit the orientation and height of the observer, despite the potential usefulness of height above the ground plane and the need to know head position to interpret retinal disparity information. To assess the contribution of orientation to perceived distance, we used the York University Tumbled and Tumbling Room facilities to modulate both perceived and actual body orientation. These facilities are realistically decorated rooms that can be systematically arranged to vary the relative orientation of visual, gravity, and body cues to upright. To assess perceived depth we exploited size/distance constancy. Observers judged the perceived length of a visual line (controlled by a QUEST adaptive procedure) projected onto the wall of the facilities, relative to the length of an unseen iron rod held in their hands. In the Tumbled Room (viewing distance 337 cm) the line was set about 10% longer when participants were supine compared to when they were upright. In the Tumbling Room (viewing distance 114 cm), the line was set about 11% longer when participants were either supine or made to feel that they were supine by the orientation of the room. Matching a longer visual line to the reference rod is compatible with the opposite wall being perceived as closer. The effect was modulated by whether viewing was monocular or binocular at a viewing distance of 114 cm but not at 337 cm, suggesting that reliable binocular cues can override the effect. © 2014 ARVO.
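The QUEST procedure mentioned above is a Bayesian adaptive staircase (Watson & Pelli, 1983) that places each trial at the current maximum-probability threshold estimate. A far simpler non-Bayesian stand-in, the 1-up/1-down staircase, conveys the core idea of homing in on a point of subjective equality; the sketch below uses a hypothetical deterministic observer and is not the QUEST algorithm itself.

```python
def staircase(respond, start, step, n_trials):
    """1-up/1-down staircase: decrease the stimulus level after a 'yes'
    response, increase it after a 'no'. The level oscillates around the
    50% point; we estimate it as the mean of the last six reversals."""
    level, reversals, last_direction = start, [], None
    for _ in range(n_trials):
        direction = -1 if respond(level) else 1
        if last_direction is not None and direction != last_direction:
            reversals.append(level)  # response flipped: record a reversal
        last_direction = direction
        level += direction * step
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Hypothetical deterministic observer whose point of subjective equality is 110
est = staircase(lambda x: x > 110, start=130, step=2, n_trials=60)
```

With this noiseless observer the track settles into oscillating between 110 and 112, so the reversal average lands between them; QUEST converges faster on noisy observers by using the full response history rather than only the last response.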
Kang, Chang-ku; Moon, Jong-yeol; Lee, Sang-im; Jablonski, Piotr G.
2013-01-01
Many moths have wing patterns that resemble the bark of trees on which they rest. The wing patterns help moths to become camouflaged and to avoid predation because the moths are able to assume specific body orientations that produce a very good match between the pattern on the bark and the pattern on the wings. Furthermore, after landing on bark, moths are able to perceive stimuli that correlate with their crypticity and are able to re-position their bodies to new, more cryptic locations and body orientations. However, the proximate mechanisms, i.e., how a moth finds an appropriate resting position and orientation, are poorly studied. Here, we used a geometrid moth Jankowskia fuscaria to examine i) whether the choice of resting orientation by moths depends on the properties of the natural background, and ii) what sensory cues moths use. We studied moths’ behavior on natural (a tree log) and artificial backgrounds, each of which was designed to mimic one of the hypothetical cues that moths may perceive on a tree trunk (visual pattern, directional furrow structure, and curvature). We found that moths mainly used structural cues from the background when choosing their resting position and orientation. Our findings highlight the possibility that moths use information from one type of sensory modality (the structure of furrows is probably detected through a tactile channel) to achieve crypticity in another sensory modality (visual). This study extends our knowledge of how behavior, sensory systems and morphology of animals interact to produce crypsis. PMID:24205118
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, facilitating the processing of visual stimuli occurring in the same side of space as the stimulated limb. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to each hand (one in each side of space), had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, to a single hand, or bilaterally, to both hands simultaneously. Results show that, compared to the bilateral condition, participants' judgments were biased in favor of the visual stimuli that occurred in the same side of space as the hand on which the unilateral nociceptive stimulus was applied. This effect was present across the whole 200-600 ms window, but, importantly, the bias increased as the time interval decreased. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
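Biased temporal order judgments of this kind are commonly summarized by a shift in the point of subjective simultaneity (PSS) of a cumulative-Gaussian psychometric function. A minimal sketch of that standard model follows; the functional form and parameter values are illustrative, not taken from the study:

```python
import math

def p_left_first(soa_ms, pss_ms, sigma_ms):
    """Cumulative-Gaussian psychometric function for a temporal order
    judgment: probability of reporting the left-side visual stimulus first,
    as a function of stimulus onset asynchrony (SOA, left minus right).
    pss_ms is the point of subjective simultaneity; a nonzero PSS is the
    attentional bias, and sigma_ms captures judgment precision."""
    return 0.5 * (1.0 + math.erf((soa_ms - pss_ms) / (sigma_ms * math.sqrt(2.0))))

# With no bias, SOA = 0 yields chance (0.5).  A nociceptive cue near the
# left hand would shift the PSS negative: the left stimulus seems earlier.
unbiased = p_left_first(0.0, 0.0, 30.0)
cued_left = p_left_first(0.0, -20.0, 30.0)
```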
ERIC Educational Resources Information Center
Kim, Nam-Gyoon; Park, Jong-Hee
2010-01-01
Recent research has demonstrated that Alzheimer's disease (AD) affects the visual sensory pathways, producing a variety of visual deficits, including the capacity to perceive structure-from-motion (SFM). Because the sensory areas of the adult brain are known to retain a large degree of plasticity, the present study was conducted to explore whether…
ICT Teachers' Acceptance of "Scratch" as Algorithm Visualization Software
ERIC Educational Resources Information Center
Saltan, Fatih; Kara, Mehmet
2016-01-01
This study aims to investigate the acceptance of ICT teachers pertaining to the use of Scratch as an Algorithm Visualization (AV) software in terms of perceived ease of use and perceived usefulness. An embedded mixed method research design was used in the study, in which qualitative data were embedded in quantitative ones and used to explain the…
The visual perception of spatial extent.
DOT National Transportation Integrated Search
1963-09-01
This study was concerned with the manner in which perceived depth and perceived frontoparallel size varied with physical distance and hence with each other. An equation expressing the relation between perceived frontoparallel size and physical depth ...
Split brain: divided perception but undivided consciousness.
Pinto, Yair; Neville, David A; Otten, Marte; Corballis, Paul M; Lamme, Victor A F; de Haan, Edward H F; Foschi, Nicoletta; Fabri, Mara
2017-05-01
In extensive studies with two split-brain patients we replicate the standard finding that stimuli cannot be compared across visual half-fields, indicating that each hemisphere processes information independently of the other. Yet, crucially, we show that the canonical textbook findings that a split-brain patient can only respond to stimuli in the left visual half-field with the left hand, and to stimuli in the right visual half-field with the right hand and verbally, are not universally true. Across a wide variety of tasks, split-brain patients with a complete and radiologically confirmed transection of the corpus callosum showed full awareness of presence, and well above chance-level recognition of location, orientation and identity of stimuli throughout the entire visual field, irrespective of response type (left hand, right hand, or verbally). Crucially, we used confidence ratings to assess conscious awareness. This revealed that even on high-confidence trials, indicative of conscious perception, response type did not affect performance. These findings suggest that severing the cortical connections between hemispheres splits visual perception, but does not create two independent conscious perceivers within one brain. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Learning semantic and visual similarity for endomicroscopy video retrieval.
Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2012-06-01
Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method allows us to statistically improve the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them.
In our resulting retrieval system, we use visual signatures for perceived-similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
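The visual-to-semantic transformation can be sketched as a weighted projection of the visual-word histogram onto the eight binary concepts. The sketch below is a simplified stand-in for the paper's Fisher-based estimation, and the weights are hypothetical placeholders:

```python
def semantic_signature(visual_hist, word_concept_weights):
    """Turn a normalized visual-word histogram into per-concept estimations
    by weighting each visual word's frequency by how strongly that word
    expresses each binary concept.  word_concept_weights[w][c] is the
    (assumed, pre-learned) weight of word w for concept c."""
    n_concepts = len(word_concept_weights[0])
    signature = [0.0] * n_concepts
    for freq, weights in zip(visual_hist, word_concept_weights):
        for c in range(n_concepts):
            signature[c] += freq * weights[c]
    return signature
```

Each entry of the result estimates how strongly one semantic concept is expressed by the video's visual words, giving a signature with as many entries as concepts rather than as many as visual words.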
Jambrošić, Kristian; Horvat, Marko; Domitrović, Hrvoje
2013-07-01
Urban soundscapes at five locations in the city of Zadar were perceptually assessed by on-site surveys and objectively evaluated based on monaural and binaural recordings. All locations were chosen to display as much auditory and visual diversity as possible. The unique sound installation known as the Sea Organ was included as an atypical music-like environment. Typical objective parameters were calculated from the recordings related to the amount of acoustic energy, spectral properties of sound, the amount of fluctuations, and tonal properties. The subjective assessment was done on-site using a common survey for evaluating the properties of the sound and visual environment. The results revealed the importance of introducing context into soundscape research because objective parameters did not show significant correlation with responses obtained from interviewees. Excessive values of certain objective parameters could indicate that a sound environment will be perceived as unpleasant or annoying, but its overall perception depends on how well it agrees with people's expectations. This was clearly seen in the case of the Sea Organ, for which the highest values of objective parameters were obtained but which, at the same time, was evaluated as the most positive sound environment in every aspect.
Lansu, Tessa A M; Cillessen, Antonius H N; Karremans, Johan C
2014-01-01
Previous research has shown that adolescents' attention for a peer is determined by the peer's status. This study examined how it is also determined by the status of the perceiving adolescent, and the gender of both parties involved (perceiver and perceived). Participants were 122 early adolescents (M age = 11.0 years) who completed sociometric measures and eye-tracking recordings of visual fixations at pictures of high-status (popular) and low-status (unpopular) classmates. Automatic attention (first-gaze preference) and controlled attention (total gaze time) were measured. Target popularity was associated with both measures of attention. These associations were further moderated by perceiver popularity and perceiver and target gender. Popular adolescents attracted attention especially from other popular adolescents. Popular boys attracted attention especially from girls. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object.
Yamasaki, Daiki; Miyoshi, Kiyofumi; Altmann, Christian F; Ashida, Hiroshi
2018-07-01
Despite accumulating evidence for the spatial rule governing cross-modal interaction according to the spatial consistency of stimuli, it remains unclear whether the 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sounds) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) of visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or receding visual stimuli. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front-rear spatial location of audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies relied on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that visual cues altered how the acoustic environment was perceived. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer.
The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected by having participants match direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities coincide in distinct interactions. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters.
Subjective results of the experiments are presented along with objective measurements for verification.
Lee, D H; Mehta, M D
2003-06-01
Effective risk communication in transfusion medicine is important for health-care consumers, but understanding the numerical magnitude of risks can be difficult. The objective of this study was to determine the effect of a visual risk communication tool on the knowledge and perception of transfusion risk. Laypeople were randomly assigned to receive transfusion risk information with either a written or a visual presentation format for communicating and comparing the probabilities of transfusion risks relative to other hazards. Knowledge of transfusion risk was ascertained with a multiple-choice quiz, and risk perception was ascertained by psychometric scaling and principal components analysis. Two hundred subjects were recruited and randomly assigned. Risk communication with both written and visual presentation formats increased knowledge of transfusion risk and decreased the perceived dread and severity of transfusion risk. Neither format changed the perceived knowledge or control of transfusion risk, nor the perceived benefit of transfusion. No differences in knowledge or risk perception outcomes were detected between the groups randomly assigned to written or visual presentation formats. Risk communication that incorporates risk comparisons in either written or visual presentation formats can improve knowledge and reduce the perception of transfusion risk in laypeople.
Sawada, H
1995-10-01
This study aimed at a descriptive understanding of the traditional methods involved in locating fishing points and navigating to them at sea, and at investigating the associated cognitive activities. Participant observations and interviews were conducted with more than 30 fishermen who employed hand-line or long-line fishing methods near Toyoshima Island, Hiroshima Prefecture. The main findings were: (1) Fishermen readily perceived environmental cues when locating fishing points, which enabled them to navigate to the correct point at sea. (2) Their memory of fishing points was not verbal but visual, directly tied to the cue perception, and was constantly renewed during fishing activities. (3) They grasped configurations of various natural conditions (e.g., swiftness of the tide, surface structure of the sea bottom) through tactile information from the fishing line, and comprehended their surroundings with accumulated knowledge and inductive inferences. And (4) their cognitive processes of perception, memory, and understanding were functionally coordinated in the series of fishing work.
ERIC Educational Resources Information Center
Cascio, Carissa J.; Foss-Feig, Jennifer H.; Burnette, Courtney P.; Heacock, Jessica L.; Cosby, Akua A.
2012-01-01
In the rubber hand illusion, perceived hand ownership can be transferred to a rubber hand after synchronous visual and tactile stimulation. Perceived body ownership and self-other relation are foundational for development of self-awareness, imitation, and empathy, which are all affected in autism spectrum disorders (ASD). We examined the rubber…
ERIC Educational Resources Information Center
Amit, Elinor; Mehoudar, Eyal; Trope, Yaacov; Yovel, Galit
2012-01-01
It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test…
Perception of Emotion: Differences in Mode of Presentation, Sex of Perceiver, and Race of Expressor.
ERIC Educational Resources Information Center
Kozel, Nicholas J.; Gitter, A. George
A 2 x 2 x 4 factorial design was utilized to investigate the effects of sex of perceiver, race of expressor (Negro and White), and mode of presentation of stimuli (audio and visual, visual only, audio only, and still pictures) on perception of emotion (POE). Perception of seven emotions (anger, happiness, surprise, fear, disgust, pain, and…
What's in a Typeface? Evidence of the Existence of Print Personalities in Arabic.
Jordan, Timothy R; AlShamsi, Alya S; Yekani, Hajar A K; AlJassmi, Maryam; Al Dosari, Nada; Hermena, Ehab W; Sheen, Mercedes
2017-01-01
Previous research suggests that different typefaces can be perceived as having distinct personality characteristics (such as strength, elegance, friendliness, romance, and humor) and that these "print personalities" elicit information in the reader that is in addition to the meaning conveyed linguistically by words. However, research in this area has previously been conducted using only English stimuli and so it may be that typefaces in English, and other languages using the Latinate alphabet, lend themselves unusually well to eliciting perception of print personalities, and the phenomenon is not a language universal. But not all written languages are Latinate languages, and one language that is especially visually distinct is Arabic. In particular, apart from being read from right to left, Arabic is formed in a cursive script in which the visual appearance of letters contrasts strongly with those used for Latinate languages. In addition, spaces between letters seldom exist in Arabic and the visual appearance of even the same letters can vary considerably within the same typeface depending on their contextual location within a word. Accordingly, the purpose of the present study was to investigate whether, like English, different Arabic typefaces inspire the attribution of print personalities. Eleven different typefaces were presented in Arabic sentences to skilled readers of Arabic and participants rated each typeface according to 20 different personality characteristics. The results showed that each typeface produced a different pattern of ratings of personality characteristics and suggest that, like English, Arabic typefaces are perceived as having distinct print personalities. Some of the implications of these results for the processes involved in reading are discussed.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
The Perspective Structure of Visual Space
2015-01-01
Luneburg’s model has been the reference for experimental studies of visual space for almost seventy years. His claim for a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg’s model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity, but not parallelism, is preserved in perspective space, and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century have been instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with the reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys, and accommodation of experimental results that rejected Luneburg’s model, show that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222
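The key geometric claims, collinearity preserved while parallelism is not, are easy to verify for a central (perspective) projection. The sketch below, with an assumed viewing distance d, is a generic perspective map rather than the paper's exact model:

```python
def perspective(point, d=1.0):
    """Central projection with viewing distance d: maps a 3-D point
    (x, y, z) to the 2-D image (x, y) scaled by d / (d + z)."""
    x, y, z = point
    s = d / (d + z)
    return (x * s, y * s)

def collinear(a, b, c, tol=1e-9):
    """2-D collinearity test via the cross product of difference vectors."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) < tol

# Three collinear 3-D points stay collinear after projection ...
line_img = [perspective(p) for p in [(0, 0, 1), (1, 1, 2), (2, 2, 3)]]
# ... but two parallel 3-D lines (both with direction (1, 0, 1)) do not:
a0, a1 = perspective((0, 0, 0)), perspective((1, 0, 1))
b0, b1 = perspective((0, 1, 0)), perspective((1, 1, 1))
```

Lines parallel to the image plane remain parallel under this map; lines receding in depth converge toward a vanishing point, which is the sense in which parallelism fails.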
NASA Astrophysics Data System (ADS)
Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu
2018-04-01
Objective. Retinal prosthesis devices have shown great value in restoring some sight for individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients’ visual experience. In this paper, we employ computer vision approaches to seek to expand the perceptible visual field in patients potentially implanted with a high-density retinal prosthesis while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method, by introducing salient object detection based on color and intensity-difference contrast, aiming to remap important information of a scene into a small visual field and preserve its original scale as much as possible. It may improve prosthetic recipients’ perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting object number and recognizing objects, are conducted under simulated prosthetic vision. As controls, we use three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method outperforms the others in preserving key features and has significantly higher recognition accuracy in comparison with the other three image retargeting methods under the condition of a small visual field and low resolution. Significance. The proposed method is beneficial for expanding the perceived visual field of prosthesis recipients and improving their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.
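As a toy illustration of saliency-driven remapping (a much-simplified stand-in for the optimized retargeting method, which additionally preserves object scale), a crop window can be chosen to maximize the total saliency it contains:

```python
def best_crop(saliency_rows, crop_w, crop_h):
    """Exhaustively pick the crop window (top, left) of size crop_h x crop_w
    that captures the most total saliency in a 2-D saliency map given as a
    list of rows.  A real pipeline would compute the saliency map from
    color and intensity-difference contrast; here it is simply an input."""
    h, w = len(saliency_rows), len(saliency_rows[0])
    best, best_pos = -1.0, (0, 0)
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            s = sum(saliency_rows[top + r][left + c]
                    for r in range(crop_h) for c in range(crop_w))
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos
```

In a prosthesis pipeline, the content inside the chosen window might then be mapped onto the implant's small electrode field, discarding low-saliency periphery rather than uniformly shrinking the whole scene.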
Perception of Affordance during Short-Term Exposure to Weightlessness in Parabolic Flight
Bourrelly, Aurore; McIntyre, Joseph; Morio, Cédric; Despretz, Pascal; Luyat, Marion
2016-01-01
We investigated the role of the visual eye-height (VEH) in the perception of affordance during short-term exposure to weightlessness. Sixteen participants were tested during parabolic flight (0g) and on the ground (1g). Participants looked at a laptop showing a room in which a doorway-like aperture was presented. They were asked to adjust the opening of the virtual doorway until it was perceived to be just wide enough to pass through (i.e., the critical aperture). We manipulated VEH by raising the level of the floor in the visual room by 25 cm. The results showed effects of VEH and of gravity on the perceived critical aperture. When VEH was reduced (i.e., when the floor was raised), the critical aperture diminished, suggesting that widths relative to the body were perceived to be larger. The critical aperture was also lower in 0g, for a given VEH, suggesting that participants perceived apertures to be wider or themselves to be smaller in weightlessness, as compared to normal gravity. However, weightlessness also had an effect on the subjective level of the eyes projected into the visual scene. Thus, setting the critical aperture as a fixed percentage of the subjective visual eye-height remains a viable hypothesis to explain how human observers judge visual scenes in terms of potential for action or “affordances”. PMID:27097218
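The fixed-percentage hypothesis at the end of the abstract can be expressed in one line: if the critical aperture is a constant fraction k of visual eye-height, then raising the virtual floor by 25 cm (reducing VEH) shrinks the critical aperture proportionally. Both k and the 160 cm eye-height below are hypothetical values for illustration, not figures from the study:

```python
def critical_aperture(veh_cm, k):
    """Affordance rule: the just-passable doorway width scales as a fixed
    fraction k of visual eye-height (VEH).  k is an assumed observer-specific
    constant; the study manipulated VEH by raising the virtual floor."""
    return k * veh_cm

normal_floor = critical_aperture(160.0, 0.45)          # assumed 160 cm VEH
raised_floor = critical_aperture(160.0 - 25.0, 0.45)   # floor raised 25 cm
```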
DEVELOPMENT AND APPLICATIONS OF A STANDARD VISUAL INDEX
A standard visual index, appropriate for characterizing visibility through uniform hazes, is defined in terms of either of the traditional metrics: visual range or extinction coefficient. This index was designed to be linear with respect to perceived visual changes over its entire...
Saccadic Corollary Discharge Underlies Stable Visual Perception
Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.
2016-01-01
Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the old world monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. 
A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647
Todorović, Dejan
2005-01-01
New geometric analyses are presented of three impressive examples of the effects of location of the vantage point on virtual 3-D spaces conveyed by linear-perspective images. In the 'egocentric-road' effect, the perceived direction of the depicted road is always pointed towards the observer, for any position of the vantage point. It is shown that perspective images of real-observer-aimed roads are characterised by a specific, simple pattern of projected side lines. Given that pattern, the position of the observer, and certain assumptions and perspective arguments, the perceived direction of the virtual road towards the observer can be predicted. In the 'skewed balcony' and the 'collapsing ceiling' effects, the position of the vantage point affects the impression of alignment of the virtual architecture conveyed by large-scale illusionistic paintings and the real architecture surrounding them. It is shown that the dislocation of the vantage point away from the viewing position prescribed by the perspective construction induces a mismatch between the painted vanishing point of elements in the picture and the real vanishing point of corresponding elements of the actual architecture. This mismatch of vanishing points provides visual information that the elements of the two architectures are not mutually parallel.
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate, against the generated ground-truth, a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
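The weight-vector idea in this abstract can be sketched generically: learn a non-negative weight per visual-word dimension by descending a hinge cost that separates similar from dissimilar pairs. The specific cost, learning rate, and update rule below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def learn_metric_weights(X, pairs, labels, margin=1.0, lr=0.05, epochs=200):
    """Learn non-negative per-dimension weights w for the distance
    d_w(x, y) = sum_k w_k * (x_k - y_k)^2 by subgradient descent on a
    hinge cost: similar pairs (label +1) are pushed below `margin`,
    dissimilar pairs (label -1) above it."""
    w = np.ones(X.shape[1])
    diffs = np.array([(X[i] - X[j]) ** 2 for i, j in pairs])
    y = np.asarray(labels, dtype=float)
    for _ in range(epochs):
        dist = diffs @ w
        viol = y * (dist - margin) > 0          # pairs violating their margin
        if not viol.any():
            break                               # all margins satisfied
        grad = (y[viol, None] * diffs[viol]).sum(axis=0)
        w = np.clip(w - lr * grad, 0.0, None)   # keep weights non-negative
    return w
```

The effect is that dimensions whose differences are uninformative (or misleading) for perceived similarity are down-weighted, so the learned distance aligns better with the expert pairwise judgments.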
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image differ in importance for the perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or by objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
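For reference, the SSIM metric the abstract names combines luminance, contrast, and structure comparisons; the standard formulation (Wang et al. 2004) uses a sliding Gaussian window, but a single-window sketch shows the core formula. This global variant is a simplification for illustration, not the full published metric.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM: the Wang et al. luminance/contrast/structure
    terms computed once over the whole image instead of per local window.
    C1 and C2 are the standard stabilizing constants."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Because every pixel contributes equally to the means and variances, a formula like this has no notion of regions of interest, which is exactly the limitation the abstract probes: localized ROI degradations can leave such a global score nearly unchanged.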
EEG activity evoked in preparation for multi-talker listening by adults and children.
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2016-06-01
Selective attention is critical for successful speech perception because speech is often encountered in the presence of other sounds, including the voices of competing talkers. Faced with the need to attend selectively, listeners perceive speech more accurately when they know characteristics of upcoming talkers before they begin to speak. However, the neural processes that underlie the preparation of selective attention for voices are not fully understood. The current experiments used electroencephalography (EEG) to investigate the time course of brain activity during preparation for an upcoming talker in young adults aged 18-27 years with normal hearing (Experiments 1 and 2) and in typically-developing children aged 7-13 years (Experiment 3). Participants reported key words spoken by a target talker when an opposite-gender distractor talker spoke simultaneously. The two talkers were presented from different spatial locations (±30° azimuth). Before the talkers began to speak, a visual cue indicated either the location (left/right) or the gender (male/female) of the target talker. Adults evoked preparatory EEG activity that started shortly after (<50 ms) the visual cue was presented and was sustained until the talkers began to speak. The location cue evoked similar preparatory activity in Experiments 1 and 2 with different samples of participants. The gender cue did not evoke preparatory activity when it predicted gender only (Experiment 1) but did evoke preparatory activity when it predicted the identity of a specific talker with greater certainty (Experiment 2). Location cues evoked significant preparatory EEG activity in children but gender cues did not. The results provide converging evidence that listeners evoke consistent preparatory brain activity for selecting a talker by their location (regardless of their gender or identity), but not by their gender alone. Copyright © 2016 Elsevier B.V. All rights reserved.
Virtual Environments in Scientific Visualization
NASA Technical Reports Server (NTRS)
Bryson, Steve; Lisinski, T. A. (Technical Monitor)
1994-01-01
Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the Virtual Windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.
Broughton, Mary C.; Davidson, Jane W.
2016-01-01
Musicians' expressive bodily movements can influence observers' perception of performance. Furthermore, individual differences in observers' music and motor expertise can shape how they perceive and respond to music performance. However, few studies have investigated the bodily movements that different observers of music performance perceive as expressive, in order to understand how they might relate to the music being produced, and the particular instrument type. In this paper, we focus on marimba performance through two case studies—one solo and one collaborative context. This study aims to investigate the existence of a core repertoire of marimba performance expressive bodily movements, identify key music-related features associated with the core repertoire, and explore how observers' perception of expressive bodily movements might vary according to individual differences in their music and motor expertise. Of the six professional musicians who observed and analyzed the marimba performances, three were percussionists and experienced marimba players. Following training, observers implemented the Laban effort-shape movement analysis system to analyze marimba players' bodily movements that they perceived as expressive in audio-visual recordings of performance. Observations that were agreed by all participants as being the same type of action at the same location in the performance recording were examined in each case study, then across the two studies. A small repertoire of bodily movements emerged that the observers perceived as being expressive. Movements were primarily allied to elements of the music structure, technique, and expressive interpretation, however, these elements appeared to be interactive. A type of body sway movement and more localized sound generating actions were perceived as expressive. These movements co-occurred and also appeared separately. 
Individual participant data revealed slightly more variety in the types and locations of actions observed, with judges revealing preferences for observing particular types of expressive bodily movements. The particular expressive bodily movements that are produced and perceived in marimba performance appear to be shaped by music-related and sound generating features, musical context, and observer music and motor expertise. With an understanding of bodily movements that are generated and perceived as expressive, embodied music performance training programs might be developed to enhance expressive performer-audience communication. PMID:27630585
Event processing in the visual world: Projected motion paths during spoken sentence comprehension.
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-05-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Tracking without perceiving: a dissociation between eye movements and motion perception.
Spering, Miriam; Pomplun, Marc; Carrasco, Marisa
2011-02-01
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
Aging and the visual perception of exocentric distance.
Norman, J Farley; Adkins, Olivia C; Norman, Hideko F; Cox, Andrea G; Rogers, Connor E
2015-04-01
The ability of 18 younger and older adults to visually perceive exocentric distances was evaluated. The observers judged the extent of fronto-parallel and in-depth spatial intervals at a variety of viewing distances from 50 cm to 164.3 cm. Most of the observers perceived in-depth intervals to be significantly smaller than fronto-parallel intervals, a finding that is consistent with previous studies. While none of the individual observers' judgments of exocentric distance were accurate, the judgments of the older observers were significantly more accurate than those of the younger observers. The precision of the observers' judgments across repeated trials, however, was not affected by age. The results demonstrate that increases in age can produce significant improvements in the visual ability to perceive the magnitude of exocentric distances. Copyright © 2015 Elsevier Ltd. All rights reserved.
Designed Natural Spaces: Informal Gardens Are Perceived to Be More Restorative than Formal Gardens
Twedt, Elyssa; Rainey, Reuben M.; Proffitt, Dennis R.
2016-01-01
Experimental research shows that there are perceived and actual benefits to spending time in natural spaces compared to urban spaces, such as reduced cognitive fatigue, improved mood, and reduced stress. Whereas past research has focused primarily on distinguishing between distinct categories of spaces (i.e., nature vs. urban), less is known about variability in perceived restorative potential of environments within a particular category of outdoor spaces, such as gardens. Conceptually, gardens are often considered to be restorative spaces and to contain an abundance of natural elements, though there is great variability in how gardens are designed that might impact their restorative potential. One common practice for classifying gardens is along a spectrum ranging from “formal or geometric” to “informal or naturalistic,” which often corresponds to the degree to which built or natural elements are present, respectively. In the current study, we tested whether participants use design informality as a cue to predict perceived restorative potential of different gardens. Participants viewed a set of gardens and rated each on design informality, perceived restorative potential, naturalness, and visual appeal. Participants perceived informal gardens to have greater restorative potential than formal gardens. In addition, gardens that were more visually appealing and more natural-looking were perceived to have greater restorative potential than less visually appealing and less natural gardens. These perceptions and precedents are highly relevant for the design of gardens and other similar green spaces intended to provide relief from stress and to foster cognitive restoration. PMID:26903899
Moro, Stefania S; Steeves, Jennifer K E
2018-04-13
Previously, we have shown that people who have had one eye surgically removed early in life during visual development have enhanced sound localization [1] and lack visual dominance, commonly observed in binocular and monocular (eye-patched) viewing controls [2]. Despite these changes, people with one eye integrate auditory and visual components of multisensory events optimally [3]. The current study investigates how people with one eye perceive the McGurk effect, an audiovisual illusion where a new syllable is perceived when visual lip movements do not match the corresponding sound [4]. We compared individuals with one eye to binocular and monocular viewing controls and found that they have a significantly smaller McGurk effect compared to binocular controls. Additionally, monocular controls tended to perceive the McGurk effect less often than binocular controls suggesting a small transient modulation of the McGurk effect. These results suggest altered weighting of the auditory and visual modalities with both short and long-term monocular viewing. These results indicate the presence of permanent adaptive perceptual accommodations in people who have lost one eye early in life that may serve to mitigate the loss of binocularity during early brain development. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Perceived synchrony for realistic and dynamic audiovisual events.
Eg, Ragnhild; Behne, Dawn M
2015-01-01
In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
Big data in medical informatics: improving education through visual analytics.
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2014-01-01
A continuous effort to improve healthcare education today is driven by the need to create competent health professionals able to meet healthcare demands. Little research has reported how the manipulation of educational data can help improve healthcare education. The emerging research field of visual analytics has the advantage of combining big data analysis and manipulation techniques, information and knowledge representation, and the human cognitive strength to perceive and recognise visual patterns. The aim of this study was therefore to explore novel ways of representing curriculum and educational data using visual analytics. Three approaches to the visualization and representation of educational data were presented. Five competencies addressed in courses at the undergraduate medical program level were identified as corresponding inaccurately to higher education board competencies. Different visual representations appear to have potential to impact the ability to perceive entities and connections in curriculum data.
Lee Masson, Haemy; Bulthé, Jessica; Op de Beeck, Hans P; Wallraven, Christian
2016-08-01
Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Auditory environmental context affects visual distance perception.
Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O
2017-08-03
In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets farther than subjects assigned to the anechoic chamber. Also, we found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We next performed a second experiment in which the same subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved the responses from the previous experiment provided they were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.
Gosselt, Jordy F; Strump, Tanja; Van Hoof, Joris
2016-12-01
Based on the existing literature, relevant determinants of availability for on-premises locations, off-premises locations, and the Internet were qualitatively explored and categorized by "experts" consisting of underage alcohol purchasers. In total, 14 focus group discussions were conducted with 94 Dutch adolescents. For on-premises locations, the high prices were perceived as the biggest disadvantage, and the ease to circumvent legal age limits as the biggest advantage. For off-premises locations, the cheap pricing was perceived as the most positive aspect, and the legal age limit as the biggest disadvantage. For online purchases, the waiting time was perceived as the most negative aspect, and the proximity of online stores as the biggest advantage. © The Author(s) 2015.
Internal attention to features in visual short-term memory guides object learning
Fan, Judith E.; Turk-Browne, Nicholas B.
2013-01-01
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. PMID:23954925
fMRI response during visual motion stimulation in patients with late whiplash syndrome.
Freitag, P; Greenlee, M W; Wachter, K; Ettlin, T M; Radue, E W
2001-01-01
After whiplash trauma, up to one fourth of patients develop chronic symptoms including head and neck pain and cognitive disturbances. Resting perfusion single-photon-emission computed tomography (SPECT) found decreased temporoparietooccipital tracer uptake among these long-term symptomatic patients with late whiplash syndrome. As MT/MST (V5/V5a) are located in that area, this study addressed the question of whether these patients show impairments in visual motion perception. We examined five symptomatic patients with late whiplash syndrome, five asymptomatic patients after whiplash trauma, and a control group of seven volunteers without a history of trauma. Tests for visual motion perception and functional magnetic resonance imaging (fMRI) measurements during visual motion stimulation were performed. Symptomatic patients showed a significant reduction in their ability to perceive coherent visual motion compared with controls, whereas the asymptomatic patients did not show this effect. fMRI activation was similar during random dot motion in all three groups, but was significantly decreased during coherent dot motion in the symptomatic patients compared with the other two groups. Reduced psychophysical motion performance and reduced fMRI responses in symptomatic patients with late whiplash syndrome both point to a functional impairment in cortical areas sensitive to coherent motion. Larger studies are needed to confirm these clinical and functional imaging results to provide a possible additional diagnostic criterion for the evaluation of patients with late whiplash syndrome.
Perceived vision-related quality of life and risk of falling among community living elderly people.
Källstrand-Eriksson, Jeanette; Baigi, Amir; Buer, Nina; Hildingh, Cathrine
2013-06-01
Falls and fall injuries among the elderly population are common, since ageing is a risk factor for falling. Today, this is a major problem because the ageing population is increasing. There are predictive factors of falling, and visual impairment is one of them. Usually, only visual acuity is considered when measuring visual impairment, and nothing regarding a person's functional visual ability is taken into account. Therefore, the aim of this study was to assess the perceived vision-related quality of life among the community living elderly using the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ-25) and to investigate whether there was any association between vision-related quality of life and falls. There were 212 randomly selected elderly people participating in the study. Our study indicated that the participants had an impaired perceived vision-related health status. General health was the only NEI VFQ-25 variable significantly associated with falls in both men and women. However, among men, near and distance activities, vision-specific social functioning, role difficulties and dependency, color and peripheral vision were related to falls. © 2012 Nordic College of Caring Science.
The same-location cost is unrelated to attentional settings: an object-updating account.
Carmel, Tomer; Lamy, Dominique
2014-08-01
The mechanisms that allow us to ignore salient yet irrelevant visual information have been a matter of intense debate. According to the contingent-capture hypothesis, such information is filtered out, whereas according to the salience-based account, it captures attention automatically. Several recent studies have reported a same-location cost that appears to fit neither of these accounts. These studies showed that responses may actually be slower when the target appears at the location just occupied by an irrelevant singleton distractor. Here, we investigated the mechanisms underlying this same-location cost. Our findings show that the same-location cost is unrelated to automatic attentional capture or strategic setting of attentional priorities, and therefore invalidate the feature-based inhibition and fast attentional disengagement accounts of this effect. In addition, we show that the cost is wiped out when the cue and target are not perceived as parts of the same object. We interpret these findings as indicating that the same-location cost has been previously misinterpreted by both bottom-up and top-down theories of attentional capture. We propose that it is better understood as a consequence of object updating, namely, as the cost of updating the information stored about an object when this object changes across time.
Wilkinson, Krista M; Light, Janice; Drager, Kathryn
2012-09-01
Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. Understanding of visual-cognitive processing - that is, how a user attends, perceives, and makes sense of the visual information on the display - therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, that is, Visual Scene Displays (VSDs), as they may relate to user visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations.
Virtual Reality: Visualization in Three Dimensions.
ERIC Educational Resources Information Center
McLellan, Hilary
Virtual reality is a newly emerging tool for scientific visualization that makes possible multisensory, three-dimensional modeling of scientific data. While the emphasis is on visualization, the other senses are added to enhance what the scientist can visualize. Researchers are working to extend the sensory range of what can be perceived in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minelli, Annalisa, E-mail: Annalisa.Minelli@univ-brest.fr; Marchesini, Ivan, E-mail: Ivan.Marchesini@irpi.cnr.it; Taylor, Faith E., E-mail: Faith.Taylor@kcl.ac.uk
Although there are clear economic and environmental incentives for producing energy from solar and wind power, there can be local opposition to their installation due to their impact upon the landscape. To date, no international guidelines exist to guide quantitative visual impact assessment of these facilities, making the planning process somewhat subjective. In this paper we demonstrate the development of a method and an Open Source GIS tool to quantitatively assess the visual impact of these facilities using line-of-sight techniques. The methods here build upon previous studies by (i) more accurately representing the shape of energy producing facilities, (ii) taking into account the distortion of the perceived shape and size of facilities caused by the location of the observer, (iii) calculating the possible obscuring of facilities caused by terrain morphology and (iv) allowing the combination of various facilities to more accurately represent the landscape. The tool has been applied to real and synthetic case studies and compared to recently published results from other models, and demonstrates an improvement in accuracy of the calculated visual impact of facilities. The tool is named r.wind.sun and is freely available from GRASS GIS AddOns. - Highlights: • We develop a tool to quantify wind turbine and photovoltaic panel visual impact. • The tool is freely available to download and edit as a module of GRASS GIS. • The tool takes into account visual distortion of the shape and size of objects. • The accuracy of calculation of visual impact is improved over previous methods.
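The core line-of-sight test underlying tools of this kind can be illustrated with a minimal sketch (this is not the r.wind.sun code; the terrain profile, observer height, and facility height below are hypothetical): a facility is visible if no terrain sample along the ray from the observer's eye rises above the straight sight line to the facility's top.

```python
# Minimal line-of-sight sketch over a 1-D terrain profile (illustrative only;
# r.wind.sun itself operates on 2-D GRASS GIS rasters and models facility shape).
def is_visible(profile, observer_h, target_h):
    """profile: terrain heights from the observer (index 0) to the target (last index).
    observer_h / target_h: heights of the eye and the facility above the terrain."""
    n = len(profile) - 1
    eye = profile[0] + observer_h          # observer eye elevation
    top = profile[-1] + target_h           # elevation of the facility's top
    for i in range(1, n):
        # elevation of the straight sight line at fraction i/n of the way
        sight = eye + (top - eye) * i / n
        if profile[i] > sight:             # intervening terrain blocks the ray
            return False
    return True

flat = [100, 100, 100, 100, 100]
ridge = [100, 100, 140, 100, 100]
print(is_visible(flat, 1.7, 80))   # a tall turbine over flat ground
print(is_visible(ridge, 1.7, 10))  # a low panel behind a ridge
```

A full implementation would trace such rays across a raster for every observer cell, which is what viewshed modules in GIS packages automate.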
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
Role of parafovea in blur perception.
Venkataraman, Abinaya Priya; Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Lundström, Linda; Marcos, Susana
2017-09-01
The blur experienced by our visual system is not uniform across the visual field. Additionally, lens designs with variable power profile, such as contact lenses used in presbyopia correction and to control myopia progression, create variable blur from the fovea to the periphery. The perceptual changes associated with varying blur profile across the visual field are unclear. We therefore measured the perceived neutral focus with images of different angular subtense (from 4° to 20°) and found that the amount of blur for which focus is perceived as neutral increases when the stimulus is extended to cover the parafovea. We also studied the changes in central perceived neutral focus after adaptation to images with similar magnitude of optical blur across the image or varying blur from center to the periphery. Altering the blur in the periphery had little or no effect on the shift of perceived neutral focus following adaptation to normal/blurred central images. These perceptual outcomes should be considered while designing bifocal optical solutions for myopia or presbyopia. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Bock, Otmar; Bury, Nils
2018-03-01
Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed mid-between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
Brain-computer interface on the basis of EEG system Encephalan
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir; Badarin, Artem; Nedaivozov, Vladimir; Kirsanov, Daniil; Hramov, Alexander
2018-04-01
We propose a brain-computer interface (BCI) for estimating the brain's response to presented visual tasks. The proposed BCI is based on the EEG recorder Encephalan-EEGR-19/26 (Medicom MTD, Russia), supplemented by custom in-house acquisition software. The BCI was tested in experimental sessions in which subjects perceived bistable visual stimuli and classified them according to their interpretation. We exposed participants to different external conditions and observed a significant decrease in the response associated with perceiving the bistable visual stimuli when a distraction was present. Based on these results, we propose that the BCI can be used to estimate human alertness while subjects solve tasks requiring substantial visual attention.
Structural Salience and the Nonaccidentality of a Gestalt
ERIC Educational Resources Information Center
Strother, Lars; Kubovy, Michael
2012-01-01
We perceive structure through a process of perceptual organization. Here we report a new perceptual organization phenomenon--the facilitation of visual grouping by global curvature. Observers viewed patterns that they perceived as organized into collections of curves. The patterns were perceptually ambiguous such that the perceived orientation of…
Tayler, Laramie D
2005-05-01
Previous studies of the effects of sexual television content have resulted in mixed findings. Based on the information processing model of media effects, I proposed that the messages embodied in such content, the degree to which viewers perceive television content as realistic, and whether sexual content is conveyed using visual or verbal symbols may influence the nature or degree of such effects. I explored this possibility through an experiment in which 182 college undergraduates were exposed to visual or verbal sexual television content, neutral television content, or no television at all prior to completing measures of sexual attitudes and beliefs. Although exposure to sexual content generally did not produce significant main effects, it did influence the attitudes of those who perceive television to be relatively realistic. Verbal sexual content was found to influence beliefs about women's sexual activity among the same group.
Neuropsychology: the touchy, feely side of vision.
Walsh, V
2000-01-13
Some visual attributes, such as colour, are purely visual, but others, such as orientation and movement, can be perceived by touch or audition. A magnetic stimulation study has now shown that the perception of tactile orientation may be influenced by visual information.
Bio-inspired display of polarization information using selected visual cues
NASA Astrophysics Data System (ADS)
Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader
2003-12-01
For imaging systems the polarization of electromagnetic waves carries much potentially useful information about such features of the world as the surface shape, material contents, local curvature of objects, as well as about the relative locations of the source, object and imaging system. The imaging system of the human eye, however, is "polarization-blind", and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some forms of sensory substitution are needed for representing polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring are the use of coherently moving dots superimposed on an image to represent various ranges of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of an image with differing polarization, modulating luminance and/or color contrast of scenes in terms of certain aspects of polarization values, and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.
Difference in Visual Processing Assessed by Eye Vergence Movements
Solé Puig, Maria; Puigcerver, Laura; Aznar-Casanova, J. Antonio; Supèr, Hans
2013-01-01
Orienting visual attention is closely linked to the oculomotor system. For example, a shift of attention is usually followed by a saccadic eye movement and can be revealed by micro saccades. Recently we reported a novel role of another type of eye movement, namely eye vergence, in orienting visual attention. Shifts in visuospatial attention are characterized by the response modulation to a selected target. However, unlike (micro-) saccades, eye vergence movements do not carry spatial information (except for depth) and are thus not specific to a particular visual location. To further understand the role of eye vergence in visual attention, we tested subjects with different perceptual styles. Perceptual style refers to the characteristic way individuals perceive environmental stimuli, and is characterized by a spatial difference (local vs. global) in perceptual processing. We tested field independent (local; FI) and field dependent (global; FD) observers in a cue/no-cue task and a matching task. We found that FI observers responded faster and had stronger modulation in eye vergence in both tasks than FD subjects. The results may suggest that eye vergence modulation may relate to the trade-off between the size of spatial region covered by attention and the processing efficiency of sensory information. Alternatively, vergence modulation may have a role in the switch in cortical state to prepare the visual system for new incoming sensory information. In conclusion, vergence eye movements may be added to the growing list of functions of fixational eye movements in visual perception. However, further studies are needed to elucidate its role. PMID:24069140
ERIC Educational Resources Information Center
Lansu, Tessa A. M.; Cillessen, Antonius H. N.; Karremans, Johan C.
2014-01-01
Previous research has shown that adolescents' attention for a peer is determined by the peer's status. This study examined how it is also determined by the status of the perceiving adolescent, and the gender of both parties involved (perceiver and perceived). Participants were 122 early adolescents (M age = 11.0 years) who completed…
Figure-ground segregation in a recurrent network architecture.
Roelfsema, Pieter R; Lamme, Victor A F; Spekreijse, Henk; Bosch, Holger
2002-05-15
Here we propose a model of how the visual brain segregates textured scenes into figures and background. During texture segregation, locations where the properties of texture elements change abruptly are assigned to boundaries, whereas image regions that are relatively homogeneous are grouped together. Boundary detection and grouping of image regions require different connection schemes, which are accommodated in a single network architecture by implementing them in different layers. As a result, all units carry signals related to boundary detection as well as grouping of image regions, in accordance with cortical physiology. Boundaries yield an early enhancement of network responses, but at a later point, an entire figural region is grouped together, because units that respond to it are labeled with enhanced activity. The model predicts which image regions are preferentially perceived as figure or as background and reproduces the spatio-temporal profile of neuronal activity in the visual cortex during texture segregation in intact animals, as well as in animals with cortical lesions.
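The two connection schemes the model combines can be caricatured in a minimal sketch (purely illustrative, not the authors' recurrent network; the threshold and feature values are made up): boundaries are flagged where a texture feature changes abruptly, and the remaining homogeneous stretches are grouped into labeled regions.

```python
def segment_texture(features, threshold=1.0):
    """Boundary detection plus region grouping on a 1-D row of texture
    features (e.g., element orientations in degrees). Illustrative only."""
    # boundary detection: abrupt change between neighboring elements
    boundaries = [i for i in range(1, len(features))
                  if abs(features[i] - features[i - 1]) > threshold]
    # region grouping: homogeneous stretches between boundaries share a label
    labels, region = [], 0
    for i in range(len(features)):
        if i in boundaries:
            region += 1
        labels.append(region)
    return boundaries, labels

row = [0, 0, 0, 90, 90, 90, 0, 0]   # a "figure" of 90° elements on a 0° ground
print(segment_texture(row, threshold=45))
```

In the model itself both operations run concurrently in different layers of one recurrent architecture, with grouping expressed as enhanced activity labeling the figural region.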
Aperture Synthesis Shows Perceptual Integration of Geometrical Form Across Saccades.
Schreiber, Kai; Morgan, Michael
2018-03-01
We investigated the perceptual bias in perceived relative lengths in the Brentano version of the Müller-Lyer arrowheads figure. The magnitude of the bias was measured both under normal whole-figure viewing condition and under an aperture viewing condition, where participants moved their gaze around the figure but could see only one arrowhead at a time through a Gaussian-weighted contrast window. The extent of the perceptual bias was similar under the two conditions. The stimuli were presented on a CRT in a light-proof room with room-lights off, but visual context was provided by a rectangular frame surrounding the figure. The frame was either stationary with respect to the figure or moved in such a manner that the bias would be counteracted if the observer were locating features with respect to the frame. Biases were reduced in the latter condition. We conclude that integration occurs over saccades, but largely in an external visual framework, rather than in a body-centered frame using an extraretinal signal.
Exploring multivariate representations of indices along linear geographic features
NASA Astrophysics Data System (ADS)
Bleisch, Susanne; Hollenstein, Daria
2018-05-01
A study of the walkability of a Swiss town required finding suitable representations of multivariate geographical data. The goal was to represent multiple indices of walkability concurrently and to visualize the data along the street network it relates to. Different indices of pedestrian friendliness were assessed for short street sections and then mapped to an overlaid grid. Basic and composite glyphs were designed using square- or triangle-areas to display one to four index values concurrently within the grid structure. Color was used to indicate different indices. Implementing visualizations for different combinations of index sets, we find that single values can be emphasized or de-emphasized by selecting the color scheme accordingly and that different color selections either allow perceiving single values or overall trends over the evaluated area. Values for up to four indices can be displayed in combination within the resulting geovisualizations and the underlying gridded road network references the data to its real world locations.
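The area-based glyph encoding described above can be sketched as follows (a toy illustration under assumed conventions, not the authors' implementation; the index names and scales are hypothetical): mapping each index value to a square whose area, rather than side length, is proportional to the value keeps perceived magnitudes roughly comparable across glyphs.

```python
import math

def glyph_side(value, vmax, cell=20.0):
    """Side length of a square glyph whose *area* scales with the index value.

    Encoding magnitude in area (side = cell * sqrt(fraction)) avoids the
    visual overstatement that linear side-length scaling would produce.
    """
    frac = max(0.0, min(1.0, value / vmax))
    return cell * math.sqrt(frac)

# Hypothetical walkability indices for one grid cell, each on a 0-10 scale.
indices = {"sidewalk": 8.0, "greenery": 2.0, "traffic_calm": 5.0, "crossings": 10.0}
sides = {k: round(glyph_side(v, 10.0), 2) for k, v in indices.items()}
print(sides)
```

Each of the (up to four) squares would then be drawn in its index's color within the cell, which is the composite-glyph idea the abstract describes.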
Saunders, Jeffrey A.
2014-01-01
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas, ranging from improving the usability of technical displays or websites to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: a quantitative dimension that increases complexity through the number of elements, and a structural dimension representing order, which is negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
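The two-factor account summarized above can be sketched as a toy linear model (the coefficients and the exponential symmetry transform below are illustrative assumptions, not the authors' fitted values): predicted complexity rises with the number of elements and falls with mirror symmetry, and a non-linear transform lets even small deviations from perfect symmetry have a large effect.

```python
import math

def predicted_complexity(n_elements, symmetry, a=0.6, b=2.5):
    """Toy two-factor complexity model. symmetry is in [0, 1], where 1 is
    perfect mirror symmetry. The exponential transform makes the structural
    term drop steeply for small departures from symmetry, mimicking the
    non-linear transformation the abstract reports. Coefficients a and b
    are made-up values for illustration, not fitted parameters."""
    quantitative = a * math.log1p(n_elements)          # more elements -> more complex
    structural = b * math.exp(-5.0 * (1.0 - symmetry))  # order reduces complexity
    return quantitative - structural

# A fully symmetric dense pattern vs. a slightly asymmetric one:
print(round(predicted_complexity(100, 1.0), 2))
print(round(predicted_complexity(100, 0.8), 2))
```

In the study itself such hand-tuned weights are replaced by coefficients estimated from human ratings, via linear models or random forests.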
Comparison of vision through surface modulated and spatial light modulated multifocal optics.
Vinas, Maria; Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; Benedi-Garcia, Clara; LaVilla, Edward Anthony; Schwiegerling, Jim; Marcos, Susana
2017-04-01
Spatial-light-modulators (SLM) are increasingly used as active elements in adaptive optics (AO) systems to simulate optical corrections, in particular multifocal presbyopic corrections. In this study, we compared vision with lathe-manufactured multi-zone (2-4) multifocal, angularly and radially, segmented surfaces and through the same corrections simulated with a SLM in a custom-developed two-active-element AO visual simulator. We found that perceived visual quality measured through real manufactured surfaces and SLM-simulated phase maps corresponded highly. Optical simulations predicted differences in perceived visual quality across different designs at Far distance, but showed some discrepancies at intermediate and near.
A Spatial and Temporal Frequency Based Figure-Ground Processor
NASA Astrophysics Data System (ADS)
Weisstein, Naomi; Wong, Eva
1990-03-01
Recent findings in visual psychophysics have shown that figure-ground perception can be specified by the spatial and temporal response characteristics of the visual system. Higher spatial frequency regions of the visual field are perceived as figure and lower spatial frequency regions are perceived as background (Klymenko and Weisstein, 1986; Wong and Weisstein, 1989). Higher temporal frequency regions are seen as background and lower temporal frequency regions are seen as figure (Wong and Weisstein, 1987; Klymenko, Weisstein, Topolski, and Hsieh, 1988). Thus, high spatial and low temporal frequencies appear to be associated with figure, and low spatial and high temporal frequencies appear to be associated with background.
Funnell, Elaine; Wilding, John
2011-02-01
We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.
Drawing experts have better visual memory while drawing.
Perdreau, Florian; Cavanagh, Patrick
2015-01-01
Drawing involves frequent shifts of gaze between the original and the drawing and visual memory helps compare the original object and the drawing across these gaze shifts while creating and correcting the drawing. It remains unclear whether this memory encodes all of the object or only the features around the current drawing position and whether both the original and the copy are equally well represented. To address these questions, we designed a "drawing" experiment coupled with a change detection task. A polygon was displayed on one screen and participants had to copy it on another, with the original and the drawing presented in alternation. At unpredictable moments during the copying process, modifications were made on the drawing and the original figure (while they were not in view). Participants had to correct their drawing every time they perceived a change so that their drawing always matched the current original figure. Our results show a better memory representation of the original figure than of the drawing, with locations relevant to the current production most accurately represented. Critically, experts showed better memory for both the original and the drawing than did novices, suggesting that experts have specialized advantages for encoding visual shapes.
Eramudugolla, Ranmalee; Mattingley, Jason B
2008-01-01
Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. A spatial bias in attention associated with neglect should therefore not affect auditory search based on spectro-temporal features of a sound target. We report that a right-brain-damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as on an auditory search task in which the target was a frequency-modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.
Rogerson, Mike; Barton, Jo
2015-01-01
Green exercise research often reports psychological health outcomes without rigorously controlling exercise. This study examines the effects of visual exercise environments on directed attention, perceived exertion and time to exhaustion, whilst measuring and controlling the exercise component. Participants completed three experimental conditions in a randomized counterbalanced order. Conditions varied by the video content viewed (nature; built; control) during two consistently-ordered exercise bouts (Exercise 1: 60% VO2peakInt for 15 min; Exercise 2: 85% VO2peakInt to voluntary exhaustion). In each condition, participants completed modified Backwards Digit Span tests (a measure of directed attention) pre- and post-Exercise 1. Energy expenditure, respiratory exchange ratio and perceived exertion were measured during both exercise bouts. Time to exhaustion in Exercise 2 was also recorded. There was a significant time by condition interaction for Backwards Digit Span scores (F(2,22) = 6.267, p = 0.007). Scores significantly improved in the nature condition (p < 0.001) but did not in the built or control conditions. There were no significant differences between conditions for either perceived exertion or physiological measures during either Exercise 1 or Exercise 2, or for time to exhaustion in Exercise 2. This was the first study to demonstrate effects of controlled exercise conducted in different visual environments on post-exercise directed attention. Via psychological mechanisms alone, visual nature facilitates attention restoration during moderate-intensity exercise. PMID:26133125
Do domestic dogs (Canis lupus familiaris) perceive the Delboeuf illusion?
Miletto Petrazzini, Maria Elena; Bisazza, Angelo; Agrillo, Christian
2017-05-01
In the last decade, visual illusions have been repeatedly used as a tool to compare visual perception among species. Several studies have investigated whether non-human primates perceive visual illusions in a human-like fashion, but little attention has been paid to other mammals, and sensitivity to visual illusions had never been investigated in dogs. Here, we studied whether domestic dogs perceive the Delboeuf illusion. In human and non-human primates, this illusion creates a misperception of item size as a function of its surrounding context. To examine this effect in dogs, we adapted the spontaneous preference paradigm recently used with chimpanzees. Subjects were presented with two plates containing food. In control trials, two different amounts of food were presented on two identical plates. In this circumstance, dogs were expected to select the larger amount. In test trials, equal food portions were presented on two plates differing in size: if dogs perceived the illusion as primates do, they were expected to select the amount of food presented on the smaller plate. Dogs significantly discriminated the two alternatives in control trials, whereas their performance did not differ from chance in test trials with the illusory pattern. The fact that dogs do not seem to be susceptible to the Delboeuf illusion suggests a potential discontinuity in the perceptual biases affecting size judgments between primates and dogs.
Torrens-Burton, Anna; Basoudan, Nasreen; Bayer, Antony J; Tales, Andrea
2017-01-01
This study examines the relationships between two measures of information processing speed associated with executive function (Trail Making Test and a computer-based visual search test), the perceived difficulty of the tasks, and perceived memory function (measured by the Memory Functioning Questionnaire) in older adults (aged 50+ y) with normal general health, cognition (Montreal Cognitive Assessment score of 26+), and mood. The participants were recruited from the community rather than through clinical services, and none had ever sought or received help from a health professional for a memory complaint or mental health problem. For both the trail making and the visual search tests, mean information processing speed was not correlated significantly with perceived memory function. Some individuals did, however, reveal substantially slower information processing speeds (outliers) that may have clinical significance and indicate those who may benefit most from further assessment and follow up. For the trail making, but not the visual search task, higher levels of subjective memory dysfunction were associated with a greater perception of task difficulty. The relationship between actual information processing speed and perceived task difficulty also varied with respect to the task used. These findings highlight the importance of taking into account the type of task and metacognition factors when examining the integrity of information processing speed in older adults, particularly as this measure is now specifically cited as a key cognitive subdomain within the diagnostic framework for neurocognitive disorders.
Raudies, Florian; Hasselmo, Michael E.
2015-01-01
Firing fields of grid cells in medial entorhinal cortex show compression or expansion after manipulations of the location of environmental barriers. This compression or expansion could be selective for individual grid cell modules with particular properties of spatial scaling. We present a model for differences in the response of modules to barrier location that arise from different mechanisms for the influence of visual features on the computation of location that drives grid cell firing patterns. These differences could arise from differences in the position of visual features within the visual field. When location was computed from the movement of visual features on the ground plane (optic flow) in the ventral visual field, this resulted in grid cell spatial firing that was not sensitive to barrier location in modules modeled with small spacing between grid cell firing fields. In contrast, when location was computed from static visual features on walls of barriers, i.e. in the more dorsal visual field, this resulted in grid cell spatial firing that compressed or expanded based on the barrier locations in modules modeled with large spacing between grid cell firing fields. This indicates that different grid cell modules might have differential properties for computing location based on visual cues, or the spatial radius of sensitivity to visual cues might differ between modules. PMID:26584432
Foggy perception slows us down.
Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H
2012-10-30
Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysical measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene, independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog (that is, fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than on the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001
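The two contrast-reduction regimes the study contrasts (uniform reduction vs. distance-dependent fog) can be sketched with the standard exponential atmospheric attenuation model (Koschmieder-style); the extinction coefficient and contrast values below are illustrative, not taken from the paper.

```python
# Apparent contrast under fog: real fog attenuates contrast more with
# distance, while a uniform reduction treats all distances equally.
# The "anti-fog" condition reverses the distance profile.
import math

def fog_contrast(c0, distance, extinction):
    """Apparent contrast of an object seen through fog (exponential law)."""
    return c0 * math.exp(-extinction * distance)

c0 = 0.9                   # intrinsic object contrast (illustrative)
distances = [10, 50, 100]  # metres

uniform = [c0 * 0.5 for _ in distances]                 # uniform reduction
real_fog = [fog_contrast(c0, d, 0.02) for d in distances]
anti_fog = list(reversed(real_fog))                     # distant objects clearer

for d, u, f, a in zip(distances, uniform, real_fog, anti_fog):
    print(f"{d:4d} m  uniform={u:.2f}  fog={f:.2f}  anti-fog={a:.2f}")
```

The point of the sketch is the distance profile, not the absolute values: in real fog contrast falls off with distance, which the study links to perceived speed.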
Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective
Pyers, Jennie E.; Perniss, Pamela; Emmorey, Karen
2015-01-01
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality. PMID:26981027
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
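Some of the distortion types the database covers (additive white noise, Gaussian blur, contrast change) can be sketched as follows; the parameter levels are illustrative, not the ones used to build QLFW.

```python
# Minimal sketch of three common image distortions applied to a
# grayscale image in [0, 1]; the "face" here is a random stand-in.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
face = rng.random((64, 64))  # stand-in for a grayscale face image

def add_white_noise(img, sigma):
    """Additive white Gaussian noise, clipped back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def blur(img, sigma):
    """Gaussian blur with standard deviation sigma (in pixels)."""
    return gaussian_filter(img, sigma=sigma)

def change_contrast(img, factor):
    """Scale contrast about mid-gray; factor < 1 reduces contrast."""
    return np.clip(0.5 + factor * (img - 0.5), 0.0, 1.0)

distorted = {
    "noise": add_white_noise(face, 0.1),
    "blur": blur(face, 2.0),
    "low_contrast": change_contrast(face, 0.3),
}
for name, img in distorted.items():
    print(f"{name}: std={img.std():.3f}")
```

JPEG and JPEG2000 compression, the other two distortion types listed, require an image codec and are omitted from this sketch.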
Illusory color mixing upon perceptual fading and filling-in does not result in 'forbidden colors'.
Hsieh, P-J; Tse, P U
2006-07-01
A retinally stabilized object readily undergoes perceptual fading. It is commonly believed that the color of the apparently vanished object is filled in with the color of the background because the features of the filled-in area are determined by features located outside the stabilized boundary. Crane and Piantanida (1983; On seeing reddish green and yellowish blue. Science, 221, 1078-1080) reported that the colors perceived upon full or partial perceptual fading can be 'forbidden' in the sense that they violate color opponency theory. For example, they claimed that their subjects could perceive "reddish greens" and "yellowish blues." Here we use visual stimuli composed of spatially alternating stripes of two different colors to investigate the characteristics of color mixing during perceptual filling-in, and to determine whether 'forbidden colors' really occur. Our results show that (1) the filled-in color is not solely determined by the background color, but can be a mixture of the background and the foreground color; (2) apparent color mixing can occur even when the two colors are presented to different eyes, implying that color mixing during filling-in is in part a cortical phenomenon; and (3) the perceived colors are not 'forbidden colors' at all, but rather intermediate colors.
McHugh, Joanna E; Kearney, Gavin; Rice, Henry; Newell, Fiona N
2012-02-01
Although both auditory and visual information can influence the perceived emotion of an individual, how these modalities contribute to the perceived emotion of a crowd of characters was hitherto unknown. Here, we manipulated the ambiguity of the emotion of either a visual or auditory crowd of characters by varying the proportions of characters expressing one of two emotional states. Using an intersensory bias paradigm, unambiguous emotional information from an unattended modality was presented while participants determined the emotion of a crowd in an attended, but different, modality. We found that emotional information in an unattended modality can disambiguate the perceived emotion of a crowd. Moreover, the size of the crowd had little effect on these crossmodal influences. The role of audiovisual information appears to be similar in perceiving emotion from individuals or crowds. Our findings provide novel insights into the role of multisensory influences on the perception of social information from crowds of individuals. PsycINFO Database Record (c) 2012 APA, all rights reserved
Wilkinson, Krista M.; Light, Janice; Drager, Kathryn
2013-01-01
Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. An understanding of visual-cognitive processing – that is, how a user attends, perceives, and makes sense of the visual information on the display – therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, Visual Scene Displays (VSDs), as they may relate to user visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations. PMID:22946989
A massively asynchronous, parallel brain.
Zeki, Semir
2015-05-19
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously, with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.
Improving Scores on Computerized Reading Assessments: The Effects of Colored Overlay Use
ERIC Educational Resources Information Center
Adams, Tracy A.
2012-01-01
Visual stress is a perceptual dysfunction that appears to affect how information is processed as it passes from the eyes to the brain. Photophobia, visual resolution, restricted focus, sustaining focus, and depth perception are all components of visual stress. Because visual stress affects what is perceived by the eye, students with this disorder…
Clinical and Laboratory Evaluation of Peripheral Prism Glasses for Hemianopia
Giorgi, Robert G.; Woods, Russell L.; Peli, Eli
2008-01-01
Purpose: Homonymous hemianopia (the loss of vision on the same side in each eye) impairs the ability to navigate and walk safely. We evaluated peripheral prism glasses as a low vision optical device for hemianopia in an extended wearing trial. Methods: Twenty-three patients with complete hemianopia (13 right) with neither visual neglect nor cognitive deficit enrolled in the 5-visit study. To expand the horizontal visual field, patients' spectacles were fitted with Press-On™ Fresnel prism segments (each 40 prism diopters) across the upper and lower portions of the lens on the hemianopic ("blind") side. Patients were asked to wear these spectacles as much as possible for the duration of the study, which averaged 9 (range: 5 to 13) weeks. Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived quality of life were measured. Results: Clinical success: 14 of 21 (67%) patients chose to continue to wear the peripheral prism glasses at the end of the study (2 patients did not complete the study for non-vision reasons). At long-term follow-up (8 to 51 months), 5 of 12 (42%) patients reported still wearing the device. Visual field expansion: expansion of about 22 degrees in both the upper and lower quadrants was demonstrated for all patients (binocular perimetry, Goldmann V4e). Perceived direction: two patients demonstrated a transient adaptation to the change in visual direction produced by the peripheral prism glasses. Quality of life: at study end, reduced difficulty noticing obstacles on the hemianopic side was reported. Conclusions: The peripheral prism glasses provided reported benefits (usually in obstacle avoidance) to 2/3 of the patients completing the study, a very good success rate for a vision rehabilitation device. Possible reasons for long-term discontinuation and limited adaptation of perceived direction are discussed. PMID:19357552
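The roughly 22-degree field expansion reported for the 40-prism-diopter segments follows directly from the definition of the prism diopter (a 1Δ prism deflects light by 1 cm at 1 m), as a quick check shows:

```python
# Deflection angle of an ophthalmic prism from its power in prism diopters.
import math

def prism_deflection_deg(prism_diopters):
    """Deflection angle in degrees; 1 prism diopter = 1 cm deflection at 1 m."""
    return math.degrees(math.atan(prism_diopters / 100.0))

print(f"40 prism diopters -> {prism_deflection_deg(40):.1f} degrees")
# about 21.8 degrees, consistent with the ~22 degree expansion measured
```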
Hallucinators find meaning in noises: pareidolic illusions in dementia with Lewy bodies.
Yokoi, Kayoko; Nishio, Yoshiyuki; Uchiyama, Makoto; Shimomura, Tatsuo; Iizuka, Osamu; Mori, Etsuro
2014-04-01
By definition, visual illusions and hallucinations differ in whether the perceived objects exist in reality. A recent study challenged this dichotomy by showing that pareidolias, complex visual illusions in which ambiguous forms are perceived as meaningful objects, are very common and phenomenologically similar to visual hallucinations in dementia with Lewy bodies (DLB). We hypothesise that a common psychological mechanism exists between pareidolias and visual hallucinations in DLB that confers meaning upon meaningless visual information. Furthermore, we believe that these two types of visual misperception have a common underlying neural mechanism, namely cholinergic insufficiency. The current study investigated pareidolic illusions using meaningless visual noise stimuli (the noise pareidolia test) in 34 patients with DLB, 34 patients with Alzheimer's disease and 28 healthy controls. Fifteen patients with DLB were administered the noise pareidolia test twice, before and after donepezil treatment. Three major findings emerged: (1) DLB patients saw meaningful illusory images (pareidolias) in meaningless visual stimuli, (2) the number of pareidolic responses correlated with the severity of visual hallucinations, and (3) cholinergic enhancement reduced both the number of pareidolias and the severity of visual hallucinations in patients with DLB. These findings suggest that a common underlying psychological and neural mechanism exists between pareidolias and visual hallucinations in DLB. Copyright © 2014 Elsevier Ltd. All rights reserved.
Assessment of visual landscape quality using IKONOS imagery.
Ozkan, Ulas Yunus
2014-07-01
The assessment of visual landscape quality is of importance to the management of urban woodlands. Satellite remote sensing may be used for this purpose as a substitute for traditional survey techniques that are both labour-intensive and time-consuming. This study examines the association between the quality of the perceived visual landscape in urban woodlands and texture measures extracted from IKONOS satellite data, which features 4-m spatial resolution and four spectral bands. The study was conducted in the woodlands of Istanbul (the most important element of urban mosaic) lying along both shores of the Bosporus Strait. The visual quality assessment applied in this study is based on the perceptual approach and was performed via a survey of expressed preferences. For this purpose, representative photographs of real scenery were used to elicit observers' preferences. A slide show comprising 33 images was presented to a group of 153 volunteers (all undergraduate students), and they were asked to rate the visual quality of each on a 10-point scale (1 for very low visual quality, 10 for very high). Average visual quality scores were calculated for landscape. Texture measures were acquired using the two methods: pixel-based and object-based. Pixel-based texture measures were extracted from the first principle component (PC1) image. Object-based texture measures were extracted by using the original four bands. The association between image texture measures and perceived visual landscape quality was tested via Pearson's correlation coefficient. The analysis found a strong linear association between image texture measures and visual quality. The highest correlation coefficient was calculated between standard deviation of gray levels (SDGL) (one of the pixel-based texture measures) and visual quality (r = 0.82, P < 0.05). 
The results showed that perceived visual quality of urban woodland landscapes can be estimated by using texture measures extracted from satellite data in combination with appropriate modelling techniques.
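The study's pixel-based SDGL measure and its Pearson correlation with mean preference ratings can be sketched as follows; the patches and ratings below are invented for illustration, not the study's data:

```python
import numpy as np

def sdgl(patch):
    """Standard deviation of gray levels (SDGL), the pixel-based
    texture measure most correlated with visual quality in the study."""
    return float(np.std(np.asarray(patch, dtype=float)))

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical data: gray-level patches for five scenes and their
# mean 10-point visual quality ratings (illustrative values only).
texture = [sdgl(p) for p in (
    [10, 12, 11, 13], [20, 35, 15, 40], [5, 6, 5, 7],
    [30, 60, 10, 55], [18, 25, 14, 28],
)]
ratings = [4.1, 6.8, 3.2, 8.5, 5.9]
r = pearson_r(texture, ratings)
```

In practice the patches would be windows of the PC1 image registered to each photographed scene; the sketch only shows the shape of the computation.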
Combined Induction of Rubber-Hand Illusion and Out-of-Body Experiences
Olivé, Isadora; Berthoz, Alain
2012-01-01
The emergence of self-consciousness depends on several processes: those of body ownership, attributing self-identity to the body, and those of self-location, localizing our sense of self. Studies of phenomena like the rubber-hand illusion (RHi) and out-of-body experience (OBE) investigate these processes, respectively for representations of a body part and the full body. It is supposed that the RHi targets only processes related to body-part representations, while the OBE relates only to full-body representations. The fundamental question of whether the body-part and full-body illusions relate to each other is nevertheless insufficiently investigated. In search of a link between body-part and full-body illusions in the brain, we developed a behavioral task combining adapted versions of the RHi and OBE. Furthermore, for the investigation of this putative link we examined the role of sensory and motor cues. We established a spatial dissociation between visual and proprioceptive feedback of a hand perceived through virtual reality, at rest or in action. Two experimental measures were introduced: one for the body-part illusion, the proprioceptive drift of the perceived localization of the hand, and one for the full-body illusion, the shift in subjective straight ahead (SSA). In the rest and action conditions it was observed that the proprioceptive drift of the left hand and the shift in SSA toward the manipulation side are equivalent. The combined effect was dependent on the manipulation of the visual representation of body parts, rejecting any main or even modulatory role for the relevant motor programs. Our study demonstrates for the first time that there is a systematic relationship between the body-part illusion and the full-body illusion, as shown by our measures. This suggests a link between the representations in the brain of a body part and the full body, and consequently a common mechanism underpinning both forms of ownership and self-location. PMID:22675312
Computational model for perception of objects and motions.
Yang, WenLu; Zhang, LiQing; Ma, LiBo
2008-06-01
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives the stimuli from natural environments. The second layer represents the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
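As a rough illustration of how such receptive fields are learned, the sketch below uses a generic reconstruction-plus-sparsity objective (classic sparse coding) rather than the paper's KL-divergence independence measure; every name and parameter here is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_basis(X, n_basis=8, lam=0.1, lr=0.01, iters=200):
    """Toy sparse-coding learner: alternate a soft-threshold coding step
    with a gradient step on the basis (receptive fields).
    X: (n_samples, n_pixels) image patches."""
    n_pix = X.shape[1]
    W = rng.standard_normal((n_basis, n_pix))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(iters):
        S = X @ W.T                                        # linear encoding
        S = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)  # sparsify responses
        R = S @ W                                          # reconstruction
        W += lr * S.T @ (X - R) / len(X)                   # reduce reconstruction error
        W /= np.linalg.norm(W, axis=1, keepdims=True)      # keep basis normalized
    return W

# Hypothetical data: random patches standing in for natural-image input.
X = rng.standard_normal((100, 16))
W = learn_basis(X)
```

Trained on whitened natural-image patches rather than noise, objectives of this family are known to yield localized, oriented, bandpass basis functions of the kind the abstract describes.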
Visual speech discrimination and identification of natural and synthetic consonant stimuli
Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.
2015-01-01
From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. 
The methods and results of this investigation open up avenues for understanding the neural and perceptual bases of visual and audiovisual speech perception and for developing practical applications such as visual speech synthesis for lipreading/speechreading. PMID:26217249
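Sensitivity in a speeded discrimination task of this kind is typically computed as d' from hit and false-alarm rates. A minimal sketch with hypothetical rates (the numbers are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n=None):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).
    Rates of exactly 0 or 1 are clamped to 1/(2n) (a common correction)
    when n, the number of trials, is given."""
    if n:
        lo, hi = 1 / (2 * n), 1 - 1 / (2 * n)
        hit_rate = min(max(hit_rate, lo), hi)
        fa_rate = min(max(fa_rate, lo), hi)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for a within-viseme ("near") pair and a "far" pair:
d_near = d_prime(0.65, 0.40)
d_far = d_prime(0.90, 0.10)
```

The abstract's pattern (d' above chance for near pairs, larger d' for far pairs) corresponds to d_near being positive but smaller than d_far.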
White constancy method for mobile displays
NASA Astrophysics Data System (ADS)
Yum, Ji Young; Park, Hyun Hee; Jang, Seul Ki; Lee, Jae Hyang; Kim, Jong Ho; Yi, Ji Young; Lee, Min Woo
2014-03-01
Consumers' demands on the image quality of mobile devices are increasing as smartphones become widely used. For example, colors may be perceived differently when content is displayed under different illuminants: white displayed under an incandescent lamp is perceived as bluish, while the same content under LED light is perceived as yellowish. When the perceived white shifts with the illuminant, image quality is degraded. The objective of the proposed white constancy method is to maintain consistent output colors regardless of the illuminant. Human visual experiments were performed to analyze viewers' perceptual constancy: participants were asked to choose the displayed white under a variety of illuminants. The relationship between the illuminants and the colors selected as white is modeled by a mapping function based on the results of these experiments, and white constancy values for image control are determined from the predesigned functions. Experimental results indicate that the proposed method yields better image quality by keeping the displayed white consistent.
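A mapping function of the kind described, from a measured illuminant to the white that viewers prefer, might be sketched as a simple interpolation over viewer-preference data. The table values and function name below are assumptions for illustration only, not the paper's measurements:

```python
def white_point(cct, table=None):
    """Linearly interpolate a display white chromaticity (x, y) for a
    measured illuminant correlated color temperature (CCT), from
    viewer-preference data. Table values are illustrative."""
    table = table or [
        (2700.0, (0.44, 0.40)),   # incandescent: viewers pick a warmer white
        (4000.0, (0.38, 0.38)),
        (6500.0, (0.31, 0.33)),   # daylight-like LED: a cooler white
    ]
    table = sorted(table)
    if cct <= table[0][0]:        # clamp below the measured range
        return table[0][1]
    if cct >= table[-1][0]:       # clamp above the measured range
        return table[-1][1]
    for (c0, (x0, y0)), (c1, (x1, y1)) in zip(table, table[1:]):
        if c0 <= cct <= c1:
            t = (cct - c0) / (c1 - c0)
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

wp = white_point(5250.0)
```

A display pipeline would then adapt its output so that rendered white lands on the interpolated chromaticity for the sensed illuminant.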
Shift in speed selectivity of visual cortical neurons: A neural basis of perceived motion contrast
Li, Chao-Yi; Lei, Jing-Jiang; Yao, Hai-Shan
1999-01-01
The perceived speed of motion in one part of the visual field is influenced by the speed of motion in its surrounding fields. Little is known about the cellular mechanisms causing this phenomenon. Recordings from mammalian visual cortex revealed that speed preference of the cortical cells could be changed by displaying a contrast speed in the field surrounding the cell’s classical receptive field. The neuron’s selectivity shifted to prefer faster speed if the contextual surround motion was set at a relatively lower speed, and vice versa. These specific center–surround interactions may underlie the perceptual enhancement of speed contrast between adjacent fields. PMID:10097161
NASA Astrophysics Data System (ADS)
Hansen, Christian; Schlichting, Stefan; Zidowitz, Stephan; Köhn, Alexander; Hindennach, Milo; Kleemann, Markus; Peitgen, Heinz-Otto
2008-03-01
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are not visible in preoperative data and their existence may require changes to the resection strategy. We propose a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated ultrasound-system. A fast communication protocol enables our application to exchange crucial data with this navigation system during an intervention. A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within a complex 3D planning model including vascular systems, tumors, and organ surfaces. In case the ultrasound plane is located inside the liver, occlusion of the ultrasound plane by the planning model is an inevitable problem for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while perceiving context-relevant planning information. To improve orientation ability and distance perception, we include additional depth cues by applying new illustrative visualization algorithms. Preliminary evaluations confirm that in case of intraoperatively detected tumors a risk analysis adaptation is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion problems.
The Impact of Visual Impairment on Perceived School Climate
ERIC Educational Resources Information Center
Schade, Benjamin; Larwin, Karen H.
2015-01-01
The current investigation examines whether visual impairment has an impact on a student's perception of the school climate. Using a large national sample of high school students, perceptions were examined for students with vision impairment relative to students with no visual impairments. Three factors were examined: self-reported level of…
A Study on the Visualization Skills of 6th Grade Students
ERIC Educational Resources Information Center
Özkan, Ayten; Arikan, Elif Esra; Özkan, Erdogan Mehmet
2018-01-01
Visualization is an effective method for students to internalize concepts and to establish correlations between concepts. Visualization method is especially more important in mathematics which is perceived as the combination of abstract concepts. In this study, whether 6th grade students can solve questions about "Fractions" by using…
Visual Speech Primes Open-Set Recognition of Spoken Words
ERIC Educational Resources Information Center
Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.
2009-01-01
Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…
Visualizing Gender with Fifth Grade Students
ERIC Educational Resources Information Center
Brown, David W., Jr.; Albers, Peggy
2014-01-01
How do fifth-grade students in a gifted class construct understandings of the opposite sex? In what ways do these constructions manifest in the visual texts created in literacy and language arts classrooms? This qualitative study integrated visual arts to understand how fifth-grade gifted students represented and perceived gender roles. Using…
Etchemendy, Pablo E; Spiousas, Ignacio; Vergara, Ramiro
2018-01-01
In a recently published work by our group [ Scientific Reports, 7, 7189 (2017)], we performed experiments of visual distance perception in two dark rooms with extremely different reverberation times: one anechoic ( T ∼ 0.12 s) and the other reverberant ( T ∼ 4 s). The perceived distance of the targets was systematically greater in the reverberant room when contrasted to the anechoic chamber. Participants also provided auditorily perceived room-size ratings which were greater for the reverberant room. Our hypothesis was that distance estimates are affected by room size, resulting in farther responses for the room perceived larger. Of much importance to the task was the subjects' ability to infer room size from reverberation. In this article, we report a postanalysis showing that participants having musical expertise were better able to extract and translate reverberation cues into room-size information than nonmusicians. However, the degree to which musical expertise affects visual distance estimates remains unclear.
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.
Petrini, Karin; McAleer, Phil; Pollick, Frank
2010-04-06
In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features. Copyright 2010 Elsevier B.V. All rights reserved.
Transient cardio-respiratory responses to visually induced tilt illusions
NASA Technical Reports Server (NTRS)
Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.
2000-01-01
Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.
McBride, Sebastian; Huelse, Martin; Lee, Mark
2013-01-01
Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
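Several of the identified requirements, convergence of top-down and bottom-up information (4), inhibition of return via spatial memory (2), and a saccade threshold (5), can be sketched as a toy priority-map computation. Function names and numeric values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def priority_map(salience, relevance, visited, ior=0.8):
    """Combine bottom-up salience with top-down task relevance and
    attenuate recently visited locations (inhibition of return)."""
    return salience * relevance * np.where(visited, 1.0 - ior, 1.0)

def next_saccade(pmap, threshold=0.5):
    """Trigger a saccade to the map's peak only if it exceeds the
    threshold; otherwise hold fixation."""
    idx = np.unravel_index(np.argmax(pmap), pmap.shape)
    return idx if pmap[idx] >= threshold else None

# Hypothetical 3x3 field: one highly salient but already-visited
# location, and one moderately salient unvisited one.
sal = np.array([[0.9, 0.1, 0.0], [0.0, 0.6, 0.0], [0.0, 0.0, 0.2]])
rel = np.ones((3, 3))
vis = np.zeros((3, 3), dtype=bool)
vis[0, 0] = True
pmap = priority_map(sal, rel, vis)
target = next_saccade(pmap)
```

Inhibition of return suppresses the visited peak, so the saccade goes to the unvisited location instead, which is the behavior the 'priority map' account predicts.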
Kudoh, Nobuo
2005-01-01
Walking without vision to previously viewed targets was compared with visual perception of allocentric distance in two experiments. Experimental evidence had shown that physically equal distances in a sagittal plane on the ground were perceptually underestimated as compared with those in a frontoparallel plane, even under full-cue conditions. In spite of this perceptual anisotropy of space, Loomis et al (1992 Journal of Experimental Psychology. Human Perception and Performance 18 906-921) found that subjects could match both types of distances in a blind-walking task. In experiment 1 of the present study, subjects were required to reproduce the extent of allocentric distance between two targets by either walking towards the targets, or by walking in a direction incompatible with the locations of the targets. The latter condition required subjects to derive an accurate allocentric distance from information based on the perceived locations of the two targets. The walked distance in the two conditions was almost identical whether the two targets were presented in depth (depth-presentation condition) or in the frontoparallel plane (width-presentation condition). The results of a perceptual-matching task showed that the depth distances had to be much greater than the width distances in order to be judged to be equal in length (depth compression). In experiment 2, subjects were required to reproduce the extent of allocentric distance from the viewing point by blindly walking in a direction other than toward the targets. The walked distance in the depth-presentation condition was shorter than that in the width-presentation condition. This anisotropy in motor responses, however, was mainly caused by apparent overestimation of length oriented in width, not by depth compression. In addition, the walked distances were much better scaled than those in experiment 1. 
These results suggest that the perceptual and motor systems share a common representation of the location of targets, whereas a dissociation in allocentric distance exists between the two systems in full-cue conditions.
Perceived change in orientation from optic flow in the central visual field
NASA Technical Reports Server (NTRS)
Dyre, Brian P.; Andersen, George J.
1988-01-01
The effects of internal depth within a simulation display on perceived changes in orientation have been studied. Subjects monocularly viewed displays simulating observer motion within a volume of randomly positioned points through a window which limited the field of view to 15 deg. Changes in perceived spatial orientation were measured by changes in posture. The extent of internal depth within the display, the presence or absence of visual information specifying change in orientation, and the frequency of motion supplied by the display were examined. It was found that increased sway occurred at frequencies equal to or below 0.375 Hz when motion at these frequencies was displayed. The extent of internal depth had no effect on the perception of changing orientation.
Neural pathways for visual speech perception
Bernstein, Lynne E.; Liebenthal, Einat
2014-01-01
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611
Direct Relationship Between Perceptual and Motor Variability
NASA Technical Reports Server (NTRS)
Liston, Dorion B.; Stone, Leland S.
2010-01-01
The time that elapses between stimulus onset and the onset of a saccadic eye movement is longer and more variable than can be explained by neural transmission times and synaptic delays (Carpenter, 1981, in: Eye Movements: Cognition & Visual Perception, Erlbaum). In theory, noise underlying response-time (RT) variability could arise at any point along the sensorimotor cascade, from sensory noise arising within the early visual processing shared with perception to noise in the motor criterion or commands necessary to trigger movements. These two loci for internal noise can be distinguished empirically; sensory internal noise predicts that response time will correlate with perceived stimulus magnitude whereas motor internal noise predicts no such correlation. Methods. We used the data described by Liston and Stone (2008, JNS 28:13866-13875), in which subjects performed a 2AFC saccadic brightness discrimination task and the perceived brightness of the chosen stimulus was then quantified in a second 2AFC perceptual task. Results. We binned each subject's data into quartiles for both signal strength (from dimmest to brightest) and RT (from slowest to fastest) and analyzed the trends in perceived brightness. We found significant effects of both signal strength (as expected) and RT on normalized perceived brightness (both p less than 0.0001, 2-way ANOVA), without significant interaction (p = 0.95, 2-way ANOVA). A plot of normalized perceived brightness versus normalized RT shows that more than half of the variance was shared (r2 = 0.56, p less than 0.0001). To rule out any possibility that some signal-strength-related artifact was generating this effect, we ran a control analysis on pairs of trials with repeated presentations of identical stimuli and found that stimuli are perceived to be brighter on trials with faster saccades (p less than 0.001, paired t-test across subjects). Conclusion.
These data show that shared early visual internal noise jitters perceived brightness and the saccadic motor output in parallel. While the present correlation could theoretically result, either directly or indirectly, from some low-level brainstem or retinal mechanism (e.g., arousal, pupil size, photoreceptor noise) that influences both visual and oculomotor circuits, this is unlikely given the earlier finding that the variability in perceived motion direction and smooth-pursuit motor output is highly correlated (Stone and Krauzlis, 2003, JOV 3:725-736), suggesting that cortical circuits contribute to the shared internal noise.
Global Statistical Learning in a Visual Search Task
ERIC Educational Resources Information Center
Jones, John L.; Kaschak, Michael P.
2012-01-01
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…
Matin, L; Li, W
2001-10-01
An individual line or a combination of lines viewed in darkness has a large influence on the elevation to which an observer sets a target so that it is perceived to lie at eye level (VPEL). These influences are systematically related to the orientation of pitched-from-vertical lines on pitched plane(s) and to the lengths of the lines, as well as to the orientations of lines of 'equivalent pitch' that lie on frontoparallel planes. A three-stage model processes the visual influence: The first stage parallel processes the orientations of the lines utilizing 2 classes of orientation-sensitive neural units in each hemisphere, with the two classes sensitive to opposing ranges of orientations; the signal delivered by each class is of opposite sign in the two hemispheres. The second stage generates the total visual influence from the parallel combination of inputs delivered by the 4 groups of the first stage, and a third stage combines the total visual influence from the second stage with signals from the body-referenced mechanism that contains information about the position and orientation of the eyes, head, and body. The circuit equation describing the combined influence of n separate inputs from stage 1 on the output of the stage 2 integrating neuron is derived for n stimulus lines which possess any combination of orientations and lengths; Each of the n lines is assumed to stimulate one of the groups of orientation-sensitive units in visual cortex (stage 1) whose signals converge on to a dendrite of the integrating neuron (stage 2), and to produce changes in postsynaptic membrane conductance (g(i)) and potential (V(i)) there. The net current from the n dendrites results in a voltage change (V(A)) at the initial segment of the axon of the integrating neuron. Nerve impulse frequency proportional to this voltage change signals the total visual influence on perceived elevation of the visual field. 
The circuit equation corresponding to the total visual influence for n equal-length inducing lines is V(A) = sum V(i)/[n + (g(A)/g(S))], where the potential change due to line i, V(i), is proportional to line orientation, g(A) is the conductance at the axon's summing point, and g(S) = g(i) for each i in the equal-length case; the net conductance change due to a line is proportional to the line's length. The circuit equation is interpreted as a basis for quantitative predictions from the model that can be compared to psychophysical measurements of the elevation of VPEL. The interpretation provides the predicted relation for the visual influence on VPEL, V, of n inducing lines each with length l: V = a + [k(1) sum theta(i)]/[n + (k(2)/l)], where theta(i) is the orientation of line i, a is the effect of the body-referenced mechanism, and k(1) and k(2) are constants. The model's output is fitted to the results of five sets of experiments in which the elevation of VPEL measured with a small target in the median plane is systematically influenced by distantly located 1-line or 2-line inducing stimuli varying in orientation and length and viewed in otherwise total darkness with gaze restricted to the median plane; each line is located at either 25 degrees eccentricity to the left or right of the median plane. The model predicts the negatively accelerated growth of VPEL with line length for each orientation and the change of the slope constant of the linear combination rule among lines from 1.00 (linear summation; short lines) to 0.61 (near-averaging; long lines). 
Fits to the data are obtained over a range of orientations from -30 degrees to +30 degrees of pitch for 1-line visual fields from lengths of 3 degrees to 64 degrees, for parallel 2-line visual fields over the same range of lengths and orientations, for short and long 2-line combinations in which each of the two members may have any orientation (parallel or nonparallel pairs), and for the well-illuminated and fully structured pitchroom. In addition, similar experiments with 2-line stimuli of equivalent pitch in the frontoparallel plane were also fitted to the model. The model accounts for more than 98% of the variance of the results in each case.
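The predicted relation V = a + [k(1) sum theta(i)]/[n + (k(2)/l)] directly reproduces the two qualitative behaviors the abstract describes, which can be checked with a minimal sketch. The constants a, k1, and k2 below are hypothetical illustrations, not the paper's fitted values.

```python
def vpel_influence(thetas_deg, length_deg, a=0.0, k1=1.0, k2=30.0):
    """Visual influence on VPEL for n equal-length inducing lines:
    V = a + k1 * sum(theta_i) / (n + k2 / l)
    thetas_deg: orientations theta_i of the n lines (degrees of pitch)
    length_deg: common line length l (degrees)
    a: contribution of the body-referenced mechanism
    k1, k2: free constants of the model (values here are hypothetical)
    """
    n = len(thetas_deg)
    return a + k1 * sum(thetas_deg) / (n + k2 / length_deg)

# Growth of the influence with line length (single 20-degree line):
v_short = vpel_influence([20], 3)    # short line -> small influence
v_long = vpel_influence([20], 64)    # long line -> larger influence

# Combination rule for two lines versus one: for short lines the k2/l term
# dominates the denominator, so doubling the lines nearly doubles V
# (toward summation, ratio near 2); for long lines k2/l vanishes and the
# n in the denominator pulls the ratio toward 1 (near-averaging).
ratio_short = vpel_influence([20, 20], 3) / vpel_influence([20], 3)
ratio_long = vpel_influence([20, 20], 64) / vpel_influence([20], 64)
```

With these illustrative constants the two-line/one-line ratio falls from about 1.8 for 3-degree lines toward about 1.2 for 64-degree lines, mirroring the reported shift of the slope constant from 1.00 toward 0.61.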
From genes to brain oscillations: is the visual pathway the epigenetic clue to schizophrenia?
González-Hernández, J A; Pita-Alcorta, C; Cedeño, I R
2006-01-01
Molecular and gene-expression data, together with recent work on mitochondrial genes and possible epigenetic regulation by non-coding genes, are revolutionizing our views on schizophrenia. Genes and epigenetic mechanisms are triggered by cell-cell interaction and by external stimuli. A number of recent clinical and molecular observations indicate that epigenetic factors may be operational in the origin of the illness. Based on these molecular insights, gene expression profiles, and the epigenetic regulation of genes, we went back to the neurophysiology (brain oscillations) and found a putative role of visual experience (i.e. visual stimuli) as an epigenetic factor. The functional evidence provided here establishes a direct link between the striate and extrastriate unimodal visual cortex and the neurobiology of schizophrenia. This result supports the hypothesis that 'visual experience' has a potential role as an epigenetic factor and contributes to triggering and/or maintaining the progression of schizophrenia. In this case, candidate genes sensitive to the visual 'insult' may be located within the visual cortex, including associative areas, while the integrity of the visual pathway before it reaches the primary visual cortex is preserved. The same effect can be observed if target genes are localized within the visual pathway, which is actually more sensitive to 'insult' during early life than the cortex per se. If this process affects gene expression at these sites, a stable, sensory-specific 'insult', i.e. distorted visual information, enters the visual system and spreads to fronto-temporo-parietal multimodal areas even from early maturation periods. 
The difference in the timing of postnatal neuroanatomical events between such areas and the primary visual cortex in humans (with the former reaching the same developmental landmarks later in life than the latter) is 'optimal' for establishing an abnormal 'cell communication' mediated by the visual system that may further interfere with the local physiology. In this context, the strategy for searching for target genes needs to be rearranged and redirected toward vision-related genes. In addition, psychophysical studies combining functional neuroimaging and electrophysiology are strongly recommended in the search for epigenetic clues that will allow gene-association studies in schizophrenia to be carried out.
Adaptation to Skew Distortions of Natural Scenes and Retinal Specificity of Its Aftereffects
Habtegiorgis, Selam W.; Rifai, Katharina; Lappe, Markus; Wahl, Siegfried
2017-01-01
Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway. PMID:28751870
Perception of Stand-on-ability: Do Geographical Slants Feel Steeper Than They Look?
Hajnal, Alen; Wagman, Jeffrey B; Doyon, Jonathan K; Clark, Joseph D
2016-07-01
Past research has shown that haptically perceived surface slant by foot is matched with visually perceived slant by a factor of 0.81. Slopes perceived visually appear shallower than when stood on without looking. We sought to identify the sources of this discrepancy by asking participants to judge whether they would be able to stand on an inclined ramp. In the first experiment, visual perception was compared to pedal perception in which participants took half a step with one foot onto an occluded ramp. Visual perception closely matched the actual maximal slope angle that one could stand on, whereas pedal perception underestimated it. Participants may have been less stable in the pedal condition while taking half a step onto the ramp. We controlled for this by having participants hold onto a sturdy tripod in the pedal condition (Experiment 2). This did not eliminate the difference between visual and haptic perception, but repeating the task while sitting on a chair did (Experiment 3). Beyond balance requirements, pedal perception may also be constrained by the limited range of motion at the ankle and knee joints while standing. Indeed, when we restricted range of motion with an ankle brace, pedal perception underestimated the affordance (Experiment 4). Implications for ecological theory are discussed in terms of the notion of functional equivalence and the role of exploration in perception. © The Author(s) 2016.
Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location
Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene
2017-01-01
Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005
A massively asynchronous, parallel brain
Zeki, Semir
2015-01-01
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously—with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871
Foggy perception slows us down
Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H
2012-01-01
Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene, independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog (that is, fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001 PMID:23110253
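The key manipulation above, uniform versus distance-dependent contrast reduction, can be sketched numerically. Real fog is commonly modeled with Koschmieder-style exponential attenuation of contrast with distance; the uniform condition instead scales every object's contrast by the same factor. The extinction coefficient and scaling factor below are illustrative assumptions, not values from the study.

```python
import math

def contrast_real_fog(c0, distance_m, extinction_per_m=0.03):
    """Distance-dependent attenuation, as in real fog: apparent contrast
    decays exponentially with viewing distance (Koschmieder-style model;
    the extinction coefficient here is an illustrative value)."""
    return c0 * math.exp(-extinction_per_m * distance_m)

def contrast_uniform_fog(c0, distance_m, factor=0.4):
    """Uniform attenuation: every object loses the same fraction of its
    contrast, regardless of its distance from the viewer."""
    return c0 * factor

# In real fog, a far object is degraded much more than a near one,
# whereas uniform 'fog' treats both identically.
near_real = contrast_real_fog(1.0, 10)
far_real = contrast_real_fog(1.0, 60)
near_uniform = contrast_uniform_fog(1.0, 10)
far_uniform = contrast_uniform_fog(1.0, 60)
```

The study's "anti-fog" would correspond to inverting the distance dependence, i.e. better-preserved contrast for far objects than for near ones.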
Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.
Gabbard, Carl; Ammar, Diala; Cordova, Alberto
2009-01-01
We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two visual modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual or motor-interference task combined with a MI or VI-reaching task. We expected increased error would occur when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities for the imaged (both MI and VI) task and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.
The internal representation of head orientation differs for conscious perception and balance control
Dalton, Brian H.; Rasman, Brandon G.; Inglis, J. Timothy
2017-01-01
Key points: We tested perceived head‐on‐feet orientation and the direction of vestibular‐evoked balance responses in passively and actively held head‐turned postures. The direction of vestibular‐evoked balance responses was not aligned with perceived head‐on‐feet orientation while maintaining prolonged passively held head‐turned postures. Furthermore, static visual cues of head‐on‐feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head‐turned posture did not elicit a rotation in the direction of the vestibular‐evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Abstract: Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head‐on‐feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head‐turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole‐body balance responses. 
Visual recalibration of head‐on‐feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular‐evoked balance response was not orthogonal to perceived head‐on‐feet orientation, regardless of the visual information provided. For prolonged head‐turned postures, balance responses consistent with actual head‐on‐feet posture occurred only during the active condition. Our results indicate that conscious perception of head‐on‐feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head‐on‐feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head‐on‐feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. PMID:28035656
The Impact of Continuity Editing in Narrative Film on Event Segmentation
Magliano, Joseph P.; Zacks, Jeffrey M.
2011-01-01
Filmmakers use continuity editing to engender a sense of situational continuity or discontinuity at editing boundaries. The goal of this study was to assess the impact of continuity editing on how people perceive the structure of events in a narrative film and to identify brain networks that are associated with the processing of different types of continuity editing boundaries. Participants viewed a commercially produced film and segmented it into meaningful events while brain activity was recorded with functional MRI. We identified three degrees of continuity that can occur at editing locations: edits that are continuous in space, time, and action; edits that are discontinuous in space or time but continuous in action; and edits that are discontinuous in action as well as space or time. Discontinuities in action had the biggest impact on behavioral event segmentation and discontinuities in space and time had minor effects. Edits were associated with large transient increases in early visual areas. Spatial-temporal changes and action changes produced strikingly different patterns of transient change, and provided evidence that specialized mechanisms in higher-order perceptual processing regions are engaged to maintain continuity of action in the face of spatiotemporal discontinuities. These results suggest that commercial film editing is shaped to support the comprehension of meaningful events that bridge breaks in low-level visual continuity, and even breaks in continuity of spatial and temporal location. PMID:21972849
Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin
2014-07-25
The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to the tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters that are effectively measured as tongue movement is observed, and the specific shape of the tongue and its position for all six uttered Malay vowels are determined. Speech rehabilitation procedure demands some kind of visual perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters based on acoustic theory of speech production, an acoustic analysis based on the uttered vowels by subjects has been performed. As the acoustic speech and articulatory parameters of uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.
Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase 1 Technical Report
1990-04-05
MANAGEMENT INFORMATION, COMMUNICATIONS, AND COMPUTER SCIENCES Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase I Technical...perceived provides information in multiple modalities and, in fact, we may rely on a non-verbal mode for much of our understanding of the situation...some tasks, almost all the pertinent information is provided via diagrams, maps, and other illustrations. Visual Knowledge Visual experience forms a
The Best Colors for Audio-Visual Materials for More Effective Instruction.
ERIC Educational Resources Information Center
Start, Jay
A number of variables may affect the ability of students to perceive, and learn from, instructional materials. The objectives of the study presented here were to determine the projected color that provided the best visual acuity for the viewer, and the necessary minimum exposure time for achieving maximum visual acuity. Fifty…
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J
2013-06-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G
2009-05-01
The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process-target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.
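The race-model account above can be sketched as a small Monte Carlo simulation: a noncompensated saccade to the original location occurs when the original movement plan reaches threshold before the compensation process, which begins at the target step and needs roughly one target-step reaction time to interrupt it. All latency parameters below are hypothetical illustrations, not the study's fitted values.

```python
import random

def step_trial_outcome(step_delay_ms, tsrt_ms, rng):
    """One search-step trial under an independent-race sketch.
    The original movement plan wins (a noncompensated saccade to the old
    target location) if it reaches threshold before the compensation
    process, which starts at the target step and takes tsrt_ms to
    interrupt it. The latency distribution below is hypothetical."""
    go_finish = rng.gauss(250, 40)          # saccade latency to the original target (ms)
    interrupt_at = step_delay_ms + tsrt_ms  # earliest time compensation can take over
    return "noncompensated" if go_finish < interrupt_at else "compensated"

def p_compensated(step_delay_ms, tsrt_ms=100, trials=10000, seed=0):
    """Probability of a (rewarded) compensated saccade to the new location."""
    rng = random.Random(seed)
    hits = sum(step_trial_outcome(step_delay_ms, tsrt_ms, rng) == "compensated"
               for _ in range(trials))
    return hits / trials

# The later the target steps before the saccade, the less time remains to
# compensate, so more saccades land on the original (now wrong) location.
early_step = p_compensated(50)    # compensation usually succeeds
late_step = p_compensated(200)    # mostly noncompensated errors
```

This mirrors how such race models let the probabilities and latencies of saccades to the initial versus final singleton locations yield an estimate of the covert compensation time.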
Perceived orientation in free-fall depends on visual, postural, and architectural factors
NASA Technical Reports Server (NTRS)
Lackner, J. R.; Graybiel, A.
1983-01-01
In orbital flight and in the free-fall phase of parabolic flight, feelings of inversion of self and spacecraft, or aircraft, are often experienced. It is shown here that perceived orientation in free-fall is dependent on the position of one's body in relation to the aircraft, the architectural features of the aircraft, and one's visual appreciation of the relative configurations of his body and the aircraft. Compelling changes in the apparent orientation of one's body and of the aircraft can be reliably and systematically induced by manipulating this relationship. Moreover, while free-floating in the absence of visual, touch, and pressure stimulation, all sense of orientation to the surroundings may be lost with only an awareness of the relative configuration of the body preserved. The absence of falling sensations during weightlessness points to the importance of visual and cognitive factors in eliciting such sensations.
Priming with real motion biases visual cortical response to bistable apparent motion
Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming
2012-01-01
Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Jack, Bradley N; Roeber, Urte; O'Shea, Robert P
2017-01-01
When dissimilar images are presented one to each eye, we do not see both images; rather, we see one at a time, alternating unpredictably. This is called binocular rivalry, and it has recently been used to study brain processes that correlate with visual consciousness, because perception changes without any change in the sensory input. Such studies have used various types of images, but the most popular have been gratings: sets of bright and dark lines of orthogonal orientations presented one to each eye. We studied whether using cardinal rival gratings (vertical, 0°, and horizontal, 90°) versus oblique rival gratings (left-oblique, -45°, and right-oblique, 45°) influences early neural correlates of visual consciousness, because of the oblique effect: the tendency for visual performance to be greater for cardinal gratings than for oblique gratings. Participants viewed rival gratings and pressed keys indicating which of the two gratings they perceived was dominant. Next, we changed one of the gratings to match the grating shown to the other eye, yielding binocular fusion. Participants perceived the rivalry-to-fusion change to the dominant grating and not to the other, suppressed grating. Using event-related potentials (ERPs), we found neural correlates of visual consciousness at the P1 for both sets of gratings, as well as at the P1-N1 for oblique gratings, and we found a neural correlate of the oblique effect at the N1, but only for perceived changes. These results show that the P1 is the earliest neural activity associated with visual consciousness and that visual consciousness might be necessary to elicit the oblique effect.
A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.
Ratzlaff, Michael; Nawrot, Mark
2016-09-01
The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth while visual motion in the opposite direction as pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.
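The depth-sign rule stated above (retinal motion in the same direction as pursuit is seen nearer than fixation; motion in the opposite direction is seen farther) reduces to a sign test on the two velocities. The function below is a minimal sketch of that rule, with velocities represented as signed scalars; it is an illustration, not the paper's model.

```python
def perceived_depth_sign(retinal_velocity, pursuit_velocity):
    """Depth-sign rule from the pursuit theory of motion parallax:
    retinal motion in the same direction as the pursuit eye movement is
    perceived nearer than the fixation point; motion in the opposite
    direction is perceived farther. Velocities are signed scalars along
    one axis (e.g. deg/s, rightward positive)."""
    product = retinal_velocity * pursuit_velocity
    if product > 0:      # same direction as pursuit
        return "nearer"
    if product < 0:      # opposite direction to pursuit
        return "farther"
    return "at fixation" # no relative motion signal

# Example: rightward pursuit (+1 deg/s) with rightward retinal motion
# is seen nearer; leftward retinal motion is seen farther.
same_dir = perceived_depth_sign(2.0, 1.0)
opposite_dir = perceived_depth_sign(-2.0, 1.0)
```

The depth-order question the study addresses arises precisely because this sign rule alone does not order two stimuli that share a common motion direction.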
Perception of Animacy from the Motion of a Single Sound Object.
Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel
2015-02-01
Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.
Smell or vision? The use of different sensory modalities in predator discrimination.
Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara
2017-01-01
Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster, and were more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss that this ability may be advantageous in aquatic environments where the visibility conditions strongly vary over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals.
In a seasonally fluctuating environment, sensory conditions may change over the year, which may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster, and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.
Coordinates of Human Visual and Inertial Heading Perception.
Crane, Benjamin Thomas
2015-01-01
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2-s, 16-cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased toward retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
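The two degree-of-freedom decoder described in the abstract, with a lateral-motion sensitivity and a coordinate-frame offset, can be sketched as a simple parametric mapping. The functional form, parameter names, and values below are illustrative assumptions, not the authors' fitted PVD model:

```python
import math

def perceived_heading(theta_deg, lateral_gain=1.3, offset_deg=0.0):
    """Hypothetical two-parameter heading decoder.

    lateral_gain - relative sensitivity to lateral vs. fore-aft motion
                   (values > 1 pull perceived headings toward +/-90 deg)
    offset_deg   - rotation of the decoder's coordinate frame (e.g. a
                   gaze-shift-induced bias toward retinal coordinates)
    """
    t = math.radians(theta_deg - offset_deg)
    # Scale the lateral component before decoding the angle back out.
    phi = math.degrees(math.atan2(lateral_gain * math.sin(t), math.cos(t)))
    return (phi + offset_deg) % 360.0

# With gain > 1, an oblique 45-degree heading is decoded as more lateral
# than it really is; pure lateral (90 deg) and fore-aft (0 deg) headings
# are unchanged.
```

Fitting `lateral_gain` and `offset_deg` to the dial responses for each gaze condition would then separate sensitivity effects from reference-frame shifts, in the spirit of the analysis the abstract describes.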
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Background: Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods: A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results: Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus was observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions: The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076
Social Beliefs and Visual Attention: How the Social Relevance of a Cue Influences Spatial Orienting.
Gobel, Matthias S; Tufft, Miles R A; Richardson, Daniel C
2018-05-01
We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue (a hand or an eye) or to its social relevance (a cue that is connected to another person with attentional and intentional states)? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue (whether the cue is connected to another person, who this person is, and what this person is doing) and does not exclusively rely on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance. © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
Timing of Visual Bodily Behavior in Repair Sequences: Evidence from Three Languages
ERIC Educational Resources Information Center
Floyd, Simeon; Manrique, Elizabeth; Rossi, Giovanni; Torreira, Francisco
2016-01-01
This article expands the study of other-initiated repair in conversation--when one party signals a problem with producing or perceiving another's turn at talk--into the domain of visual bodily behavior. It presents one primary cross-linguistic finding about the timing of visual bodily behavior in repair sequences: if the party who initiates repair…
Visual land-use compatibility and scenic-resource quality
William G. Hendrix
1977-01-01
The effect that land-use relationships have upon perceived quality of the visual landscape is discussed, and a case is made for expansion of fit-misfit theory into what has been called visual land-use compatibility. An assessment methodology that was designed to test people's perceptions of land-use relationships is presented and the results are discussed.
ERIC Educational Resources Information Center
Railo, H.; Tallus, J.; Hamalainen, H.
2011-01-01
Studies have suggested that supramodal attentional resources are biased rightward due to asymmetric spatial fields of the two hemispheres. This bias has been observed especially in right-handed subjects. We presented left and right-handed subjects with brief uniform grey visual stimuli in either the left or right visual hemifield. Consistent with…
ERIC Educational Resources Information Center
Russo-Zimet, Gila; Segel, Sarit
2014-01-01
This research was designed to examine how early-childhood educators pursuing their graduate degrees perceive the concept of happiness, as conveyed in visual representations. The research methodology combines qualitative and quantitative paradigms using the metaphoric collage, a tool used to analyze visual and verbal aspects. The research…
Sensitivity to Visual Prosodic Cues in Signers and Nonsigners
ERIC Educational Resources Information Center
Brentari, Diane; Gonzalez, Carolina; Seidl, Amanda; Wilbur, Ronnie
2011-01-01
Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested…
Lange, Eckart; Hehl-Lange, Sigrid; Brewer, Mark J
2008-11-01
The provision of green space is increasingly being perceived as an important factor for quality of life. However, green spaces often face high developmental pressure. The main objective of this study is to investigate a prospective approach to green space planning by combining three-dimensional (3D) visualization of green space scenarios and survey techniques to facilitate improved participation of the public. Aside from the 'Status quo', scenarios 'Agriculture', 'Recreation', 'Nature conservation' and 'Wind turbines' are visualized in three dimensions. In order to test responses, a survey was conducted both in print format and on the Internet. Overall, 49 different visualizations that belong to one of the scenarios were available in the survey and were rated according to the perceived esthetic, recreational and ecological values. The highest rated scenes include vegetation elements such as meadows with orchards, single trees, shrubs or forest. The least attractive scenes are those where buildings are highly dominant or where there are no vegetation elements. Based on the ratings for the individual images and on the corresponding scenarios, our study shows that there is high potential for improving the existing landscape. All suggested changes are either rated about equal to or considerably higher than the status quo, with the scenario 'Nature conservation' receiving the highest scores.
Hagura, Nobuhiro; Oouchida, Yutaka; Aramaki, Yu; Okada, Tomohisa; Matsumura, Michikazu; Sadato, Norihiro
2009-01-01
Combination of visual and kinesthetic information is essential to perceive bodily movements. We conducted behavioral and functional magnetic resonance imaging experiments to investigate the neuronal correlates of visuokinesthetic combination in perception of hand movement. Participants experienced illusory flexion movement of their hand elicited by tendon vibration while they viewed video-recorded flexion (congruent: CONG) or extension (incongruent: INCONG) motions of their hand. The amount of illusory experience was graded by the visual velocities only when visual information regarding hand motion was concordant with kinesthetic information (CONG). The left posterolateral cerebellum was specifically recruited under the CONG, and this left cerebellar activation was consistent for both left and right hands. The left cerebellar activity reflected the participants' intensity of illusory hand movement under the CONG, and we further showed that coupling of activity between the left cerebellum and the “right” parietal cortex emerges during this visuokinesthetic combination/perception. The “left” cerebellum, working with the anatomically connected high-order bodily region of the “right” parietal cortex, participates in online combination of exteroceptive (vision) and interoceptive (kinesthesia) information to perceive hand movement. The cerebro–cerebellar interaction may underlie updating of one's “body image,” when perceiving bodily movement from visual and kinesthetic information. PMID:18453537
Schmutz, Sven; Sonderegger, Andreas; Sauer, Juergen
2017-09-01
The present study examined whether implementing recommendations of Web accessibility guidelines would have different effects on nondisabled users than on users with visual impairments. The predominant approach for making Web sites accessible for users with disabilities is to apply accessibility guidelines. However, it has hardly been examined whether this approach has side effects for nondisabled users. A comparison of the effects on both user groups would contribute to a better understanding of possible advantages and drawbacks of applying accessibility guidelines. Participants from two matched samples, comprising 55 participants with visual impairments and 55 without impairments, took part in a synchronous remote testing of a Web site. Each participant was randomly assigned to one of three Web sites, which differed in the level of accessibility (very low, low, and high) according to recommendations of the well-established Web Content Accessibility Guidelines 2.0 (WCAG 2.0). Performance (i.e., task completion rate and task completion time) and a range of subjective variables (i.e., perceived usability, positive affect, negative affect, perceived aesthetics, perceived workload, and user experience) were measured. Higher conformance to Web accessibility guidelines resulted in increased performance and more positive user ratings (e.g., perceived usability or aesthetics) for both user groups. There was no interaction between user group and accessibility level. Higher conformance to WCAG 2.0 may result in benefits for nondisabled users and users with visual impairments alike. Practitioners may use the present findings as a basis for deciding whether and how best to implement accessibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurtovoi, G.K.; Burdianskaya, E.O.
1960-01-01
The primary substrate excited by threshold doses of x radiation of the normal human eye causes perception of a light flash in the retinal region. The threshold dose for the retina is about 1 mr; the threshold absorbed dose is about 1 mrad. Persons with a removed eyeball, on irradiation of the operated region with a frontal x-ray beam, perceive a flash of light at definite doses of radiation. Six persons taking part in an experiment saw a flash at doses of 17 to 150 mr (different observers saw the flash at different doses) and did not see a flash at doses of 5 to 90 mr. The cause of x-ray phosphene on frontal irradiation of the region of the removed eye with threshold doses is neither the reactivity of the optic nerve stump, the reactivity of the parts of the brain irradiated, nor the sensitivity of the skin receptors. In the cases considered, the cause of x-ray phosphene was irradiation of the retina of the normal eye by scattered x rays. The averaged coefficient of scatter was about 2%. On irradiation of the occipital regions of the brain in subjects with normal eyes at a dose of about 150 mr, one subject perceived a flash of light. In this case, the absorbed dose for the occipital regions of the brain was about 40 mrad. The reason for this phenomenon must be explored. Stimulation of the cerebral formations (after atrophic changes in the visual tract and cortex) by x radiation with a dose of up to 3 r did not cause visual sensations. With the disposition of the beam, the absorbed dose for the chiasma was about 1 rad and for the occipital regions about 0.2 rad. In the study of threshold visual sensations and their causes on x irradiation of various regions of the head, it is important to apply defined doses of radiation. Scatter of the x rays in the head must be taken into consideration. (auth)
Effects of aging on pointing movements under restricted visual feedback conditions.
Zhang, Liancun; Yang, Jiajia; Inai, Yoshinobu; Huang, Qiang; Wu, Jinglong
2015-04-01
The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young subjects and fifteen elderly subjects performed pointing movements under four visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM), and no visual feedback of target location (NT). This study suggested that Fitts' law applies to pointing movements of elderly adults under different visual restriction conditions. Moreover, a significant main effect of aging on movement time was found in all four tasks. Peripheral and central changes may be the key factors underlying these differences. Furthermore, no significant main effect of age on mean accuracy rate was found under restricted visual feedback. The present study suggested that the elderly subjects made very similar use of the available sensory information as the young subjects under restricted visual feedback conditions. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for both young and elderly subjects. Copyright © 2014 Elsevier B.V. All rights reserved.
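Fitts' law, which the abstract reports holds for elderly adults across feedback conditions, relates movement time to an index of difficulty. A minimal sketch using the original Fitts formulation follows; the regression coefficients are illustrative placeholders, not values fitted in the study:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.15):
    """Predicted movement time (s) from Fitts' law.

    Uses the classic index of difficulty ID = log2(2D/W) in bits.
    a (intercept) and b (slope, s/bit) are illustrative; in practice
    they are fit per subject and per feedback condition.
    """
    index_of_difficulty = math.log2(2 * distance / width)  # bits
    return a + b * index_of_difficulty

# Farther and smaller targets raise ID and so predict longer times.
easy = fitts_mt(distance=0.08, width=0.04)  # ID = 2 bits
hard = fitts_mt(distance=0.32, width=0.01)  # ID = 6 bits
```

Comparing fitted slopes between young and elderly groups, and across the FV/NV/NM/NT conditions, is one way such a model can quantify the aging effect on movement time the abstract reports.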
The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
2009-11-18
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
Hand effects on mentally simulated reaching.
Gabbard, Carl; Ammar, Diala; Rodrigues, Luis
2005-08-01
Within the area of simulated (imagined) versus actual movement research, investigators have discovered that mentally simulated movements, like real actions, are controlled primarily by the hemispheres contralateral to the simulated limb. Furthermore, evidence points to a left-brain advantage for accuracy of simulated movements. With this information it could be suggested that, compared to left-handers, most right-handers would have an advantage. To test this hypothesis, strong right- and left-handers were compared on judgments of perceived reachability to visual targets lasting 150 ms in multiple locations of the midline, right-, and left-visual fields (RVF/LVF). Regarding within-group responses, we found no hemispheric or hand use advantage for right-handers. Although left-handers revealed no hemispheric advantage, there was a significant hand effect, favoring the non-dominant limb, most notably in the LVF. This finding is explained in terms of a possible interference effect for left-handers, not shown for right-handers. Overall, left-handers displayed significantly more errors across hemispace. Therefore, it appears that when comparing hand groups, a left-hemisphere advantage favoring right-handers is plausible.
Perspective Space as a Model for Distance and Size Perception.
Erkelens, Casper J
2017-01-01
In the literature, perspective space has been introduced as a model of visual space. Perspective space is grounded on the perspective nature of visual space during both binocular and monocular vision. A single parameter, that is, the distance of the vanishing point, transforms the geometry of physical space into that of perspective space. The perspective-space model predicts perceived angles, distances, and sizes. The model is compared with other models for distance and size perception. Perspective space predicts that perceived distance and size as a function of physical distance are described by hyperbolic functions. Alternatively, power functions have been widely used to describe perceived distance and size. Comparison of power and hyperbolic functions shows that both functions are equivalent within the range of distances that have been judged in experiments. Two models describing perceived distance on the ground plane appear to be equivalent with the perspective-space model too. The conclusion is that perspective space unifies a number of models of distance and size perception.
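The hyperbolic compression that perspective space predicts, governed by a single vanishing-point distance parameter, can be sketched alongside the power-function alternative the abstract mentions. The exact functional forms and parameter values below are assumptions for illustration, not the paper's fitted model:

```python
def perceived_distance_hyperbolic(d, vanishing_dist=100.0):
    """Illustrative hyperbolic compression of physical distance d.

    Perceived distance grows with d but saturates at vanishing_dist,
    the assumed distance of the vanishing point (hypothetical value).
    """
    return d / (1.0 + d / vanishing_dist)

def perceived_distance_power(d, a=1.0, b=0.8):
    """Power-function alternative widely used in the literature.

    a and b are illustrative coefficients, not fitted values; with
    b < 1 the function also compresses far distances.
    """
    return a * d ** b
```

Over the mid-range of distances typically judged in experiments, suitably chosen parameters make the two curves hard to distinguish, which is consistent with the abstract's claim that the functions are equivalent within the tested range; they diverge mainly at extreme distances, where the hyperbolic form saturates and the power form does not.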
Age-related changes in visual exploratory behavior in a natural scene setting
Hamel, Johanna; De Beukelaer, Sophie; Kraft, Antje; Ohl, Sven; Audebert, Heinrich J.; Brandt, Stephan A.
2013-01-01
Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data from 73 participants of varying ages, driving two different courses, were analyzed. We analyzed the influence of route difficulty level, age, and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In older subjects, head movements contributed increasingly to gaze amplitude. More demanding courses and more peripheral stimulus locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects, suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media. PMID:23801970
Nurminen, Lauri; Angelucci, Alessandra
2014-01-01
The responses of neurons in primary visual cortex (V1) to stimulation of their receptive field (RF) are modulated by stimuli in the RF surround. This modulation is suppressive when the stimuli in the RF and surround are of similar orientation, but less suppressive or facilitatory when they are cross-oriented. Similarly, in human vision surround stimuli selectively suppress the perceived contrast of a central stimulus. Although the properties of surround modulation have been thoroughly characterized in many species, cortical areas, and sensory modalities, its role in perception remains unknown. Here we argue that surround modulation in V1 consists of multiple components having different spatio-temporal and tuning properties, generated by different neural circuits and serving different visual functions. One component arises from LGN afferents, is fast, untuned for orientation, and spatially restricted to the surround region nearest to the RF (the near-surround); its function is to normalize V1 cell responses to local contrast. Intra-V1 horizontal connections contribute a slower, narrowly orientation-tuned component to near-surround modulation, whose function is to increase the coding efficiency of natural images in a manner that leads to the extraction of object boundaries. The third component is generated by top-down feedback connections to V1, is fast, broadly orientation-tuned, and extends into the far-surround; its function is to enhance the salience of behaviorally relevant visual features. Far- and near-surround modulation, thus, act as parallel mechanisms: the former quickly detects and guides saccades/attention to salient visual scene locations, the latter segments object boundaries in the scene. PMID:25204770
Neural Correlates of Interindividual Differences in Children’s Audiovisual Speech Perception
Nath, Audrey R.; Fava, Eswen E.; Beauchamp, Michael S.
2011-01-01
Children use information from both the auditory and visual modalities to aid in understanding speech. A dramatic illustration of this multisensory integration is the McGurk effect, an illusion in which an auditory syllable is perceived differently when it is paired with an incongruent mouth movement. However, there are significant interindividual differences in McGurk perception: some children never perceive the illusion, while others always do. Because converging evidence suggests that the posterior superior temporal sulcus (STS) is a critical site for multisensory integration, we hypothesized that activity within the STS would predict susceptibility to the McGurk effect. To test this idea, we used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) in seventeen children aged 6 to 12 years to measure brain responses to three audiovisual stimulus categories: McGurk incongruent, non-McGurk incongruent and congruent syllables. Two separate analysis approaches, one using independent functional localizers and another using whole-brain voxel-based regression, showed differences in the left STS between perceivers and non-perceivers. The STS of McGurk perceivers responded significantly more than non-perceivers to McGurk syllables, but not to other stimuli, and perceivers’ hemodynamic responses in the STS were significantly prolonged. In addition to the STS, weaker differences between perceivers and non-perceivers were observed in the FFA and extrastriate visual cortex. These results suggest that the STS is an important source of interindividual variability in children’s audiovisual speech perception. PMID:21957257
Pollock, Brice; Burton, Melissa; Kelly, Jonathan W; Gilbert, Stephen; Winer, Eliot
2012-04-01
Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
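The ray-intersection prediction mentioned above can be sketched for the simple symmetric case of a virtual point centered behind the screen. This is an illustrative reconstruction, not the authors' implementation; the interocular distance and viewing geometry are assumed values:

```python
# Illustrative sketch of a ray-intersection model of stereo geometry
# (not the authors' implementation; all parameter values are assumed).
# A point rendered at depth z behind the screen for a viewer at the
# center of projection (CoP) is re-intersected from a displaced viewpoint.

def screen_separation(z, d_cop, ipd):
    """Horizontal separation of the two screen projections of a point
    rendered at depth z behind a screen d_cop away from the CoP."""
    return ipd * z / (d_cop + z)

def perceived_depth(z, d_cop, d_view, ipd=0.065):
    """Depth behind the screen at which the displaced viewer's eye rays
    re-intersect (simple symmetric-geometry case, distances in meters)."""
    s = screen_separation(z, d_cop, ipd)
    return d_view * s / (ipd - s)

# A point 1 m behind the screen, rendered for a CoP 2 m from the screen:
z, d_cop = 1.0, 2.0
print(perceived_depth(z, d_cop, d_view=2.0))  # at the CoP: veridical
print(perceived_depth(z, d_cop, d_view=1.5))  # forward: compressed
print(perceived_depth(z, d_cop, d_view=2.5))  # backward: expanded
```

In this geometric model the distortion scales directly with viewer displacement; the study reports that observers' actual distortion was smaller than such a prediction.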
Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert
2010-10-13
The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. 
However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing.
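The foveal/peripheral account above can be caricatured with a toy model in which local (grating) motion is increasingly misattributed to the disk's trajectory at larger eccentricities. The weighting function and all parameter values below are hypothetical, chosen only to illustrate the qualitative prediction that the perceived descent angle grows with eccentricity:

```python
import math

# Toy illustration (not the authors' model): in the periphery, first- and
# second-order motion are integrated, so the internal grating's drift is
# blended into the disk's perceived direction with a weight that grows
# with eccentricity. The weighting function and parameters are assumed.

def perceived_descent_angle(ecc_deg, v_global=1.0, v_local=1.0, k=0.05):
    """Perceived angle of descent (degrees from vertical) of a disk
    falling at v_global while its grating drifts laterally at v_local."""
    w = min(1.0, k * ecc_deg)          # assumed eccentricity weighting
    lateral = w * v_local              # misattributed local motion
    return math.degrees(math.atan2(lateral, v_global))

for ecc in (0, 10, 20):                # angle grows with eccentricity
    print(ecc, round(perceived_descent_angle(ecc), 1))
```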
NASA Astrophysics Data System (ADS)
Tabrizian, P.; Petrasova, A.; Baran, P.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.
2017-12-01
Viewshed modelling, the process of defining, parsing and analyzing the structure of landscape visual space within GIS, has been commonly used in applications ranging from landscape planning and ecosystem services assessment to geography and archaeology. However, less effort has been made to understand whether and to what extent these objective analyses predict the actual on-the-ground perception of a human observer. Moreover, viewshed modelling at the human-scale level requires incorporation of fine-grained landscape structure (e.g., vegetation) and patterns (e.g., landcover) that are typically omitted from visibility calculations or unrealistically simulated, leading to significant error in predicting visual attributes. This poster illustrates how photorealistic Immersive Virtual Environments and high-resolution geospatial data can be used to integrate objective and subjective assessments of visual characteristics at the human-scale level. We performed viewshed modelling for a systematically sampled set of viewpoints (N=340) across an urban park using open-source GIS (GRASS GIS). For each point a binary viewshed was computed on a 3D surface model derived from high-density leaf-off LIDAR (QL2) points. The viewshed map was combined with high-resolution (0.5 m) landcover derived through fusion of orthoimagery, lidar vegetation, and vector data. Geostatistics and landscape structure analysis were performed to compute topological and compositional metrics for visual scale (e.g., openness), complexity (pattern, shape and object diversity), and naturalness. Based on the viewshed model output, a sample of 24 viewpoints representing the variation of visual characteristics was selected and geolocated. For each location, 360° imagery was captured using a DSLR camera mounted on a GigaPan robot.
We programmed a virtual reality application through which human subjects (N=100) immersively experienced a random presentation of the selected environments via a head-mounted display (Oculus Rift CV1), and rated each location on perceived openness, naturalness and complexity. Regression models were used to correlate model outputs with participants' responses. The results indicated strong, significant correlations for openness and naturalness, and a moderate correlation for complexity estimations.
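The binary-viewshed step in the pipeline above can be illustrated with a minimal line-of-sight computation on a toy elevation grid. This is a sketch of what GRASS GIS's r.viewshed does at scale, not the authors' code; real implementations also account for earth curvature, cell resolution, and observer/target height offsets:

```python
# Minimal line-of-sight viewshed on a toy elevation grid (illustrative
# only; assumed grid and eye height, not the study's actual pipeline).

def visible(dem, viewer, target, eye_height=1.7):
    """True if `target` cell is visible from `viewer` cell on grid `dem`."""
    (r0, c0), (r1, c1) = viewer, target
    z0 = dem[r0][c0] + eye_height
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True
    for i in range(1, steps):                 # sample along the sight line
        t = i / steps
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sightline_z = z0 + t * (dem[r1][c1] - z0)
        if dem[r][c] > sightline_z:           # terrain blocks the ray
            return False
    return True

def openness(dem, viewer):
    """Fraction of grid cells visible from `viewer`, a crude analogue of
    the 'openness' visual-scale metric."""
    cells = [(r, c) for r in range(len(dem)) for c in range(len(dem[0]))]
    return sum(visible(dem, viewer, cell) for cell in cells) / len(cells)

# Flat terrain with a 5 m ridge down the middle column:
dem = [[0, 0, 5, 0, 0] for _ in range(5)]
print(openness(dem, (2, 0)))  # the ridge hides the far side of the grid
```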
Buchs, Galit; Maidenbaum, Shachar; Levy-Tzedek, Shelly; Amedi, Amir
2015-01-01
Purpose: To visually perceive our surroundings we constantly move our eyes and focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive, like bionic eyes, and non-invasive, like Sensory Substitution Devices (SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a ‘visual’ scene when offered this information via different sensory modalities, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? Methods: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic’s zooming mechanism. Results: After specialized training of just 6–10 hours, blind participants successfully and actively integrated facial features into cartooned identities in 79±18% of the trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04). Conclusions: These findings show that even users who lack any previous visual experience can indeed integrate this visual information with increased resolution. This potentially has important practical visual rehabilitation implications for both invasive and non-invasive methods. PMID:26518671
Object-location binding across a saccade: A retinotopic Spatial Congruency Bias
Shafer-Skelton, Anna; Kupitz, Colin N.; Golomb, Julie D.
2017-01-01
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating – a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the “Spatial Congruency Bias”, to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the Spatial Congruency Bias was present for both gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (gabors, shapes, and faces all showed a retinotopic Congruency Bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates. PMID:28070793
Location perception: the X-Files parable.
Prinzmetal, William
2005-01-01
Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.
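The spatial-average rule and the more-samples account of attention described above lend themselves to a brief sketch. This is an illustration of the two ideas, not Prinzmetal's actual simulation, and all parameter values are assumed:

```python
import random

# Illustrative sketch (not the paper's simulation): a target's location is
# estimated as the spatial average of noisy position samples, and
# attention is modeled as drawing more samples. Parameters are assumed.

def spatial_average_estimate(true_x, n_samples, rng, noise_sd=1.0):
    """Average of n noisy position samples (the spatial-average rule)."""
    return sum(rng.gauss(true_x, noise_sd) for _ in range(n_samples)) / n_samples

def estimate_sd(true_x, n_samples, trials=2000, seed=1):
    """Standard deviation of the location estimate across repeated trials."""
    rng = random.Random(seed)
    ests = [spatial_average_estimate(true_x, n_samples, rng) for _ in range(trials)]
    mean = sum(ests) / trials
    return (sum((e - mean) ** 2 for e in ests) / trials) ** 0.5

# With more samples (attended target), estimates cluster more tightly:
print(estimate_sd(0.0, 4), estimate_sd(0.0, 16))
```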
Effects of Motion and Figural Goodness on Haptic Object Perception in Infancy.
ERIC Educational Resources Information Center
Streri, Arlette; Spelke, Elizabeth S.
1989-01-01
After haptic habituation to a ring display, infants perceived the rings in two experiments as parts of one connected object. In both haptic and visual modes, infants appeared to perceive object unity by analyzing motion but not by analyzing figural goodness. (RH)
Visualization Mode, Perceived Immediacy and Audience Evaluation of TV News.
ERIC Educational Resources Information Center
Ksobiech, Kenneth; And Others
1980-01-01
An analysis of audience perceptions of videotaped versus filmed actualities on television newscasts suggested that videotaped actualities were perceived as more immediate than filmed actualities, and that audience evaluation of newscasts using videotaped actualities was higher than audience evaluation of newscasts using filmed actualities. (GT)
Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location
Kanwisher, Nancy
2012-01-01
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434
Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf
Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao
2016-01-01
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
"The Gallery": An Experiential Approach to Visual Aid Construction and Analysis in the Classroom
ERIC Educational Resources Information Center
Tyma, Adam W.
2008-01-01
When working with students to prepare oral presentations, the question--"What makes an effective visual aid?"--often arises. Most teachers realize the value of visual aids, but what makes them effective is sometimes unclear. There seems to be a disconnect between what the teacher, the textbook, and the student actually perceive to be a "good"…
Christoffersen, Gert R. J.; Laugesen, Jakob L.; Møller, Per; Bredie, Wender L. P.; Schachtman, Todd R.; Liljendahl, Christina; Viemose, Ida
2017-01-01
Human recognition of foods and beverages are often based on visual cues associated with flavors. The dynamics of neurophysiological plasticity related to acquisition of such long-term associations has only recently become the target of investigation. In the present work, the effects of appetitive and aversive visuo-gustatory conditioning were studied with high density EEG-recordings focusing on late components in the visual evoked potentials (VEPs), specifically the N2-P3 waves. Unfamiliar images were paired with either a pleasant or an unpleasant juice and VEPs evoked by the images were compared before and 1 day after the pairings. In electrodes located over posterior visual cortex areas, the following changes were observed after conditioning: the amplitude from the N2-peak to the P3-peak increased and the N2 peak delay was reduced. The percentage increase of N2-to-P3 amplitudes was asymmetrically distributed over the posterior hemispheres despite the fact that the images were bilaterally symmetrical across the two visual hemifields. The percentage increases of N2-to-P3 amplitudes in each experimental subject correlated with the subject’s evaluation of positive or negative hedonic valences of the two juices. The results from 118 scalp electrodes gave surface maps of theta power distributions showing increased power over posterior visual areas after the pairings. Source current distributions calculated from swLORETA revealed that visual evoked currents rose as a result of conditioning in five cortical regions—from primary visual areas and into the inferior temporal gyrus (ITG). These learning-induced changes were seen after both appetitive and aversive training while a sham trained control group showed no changes. It is concluded that long-term visuo-gustatory conditioning potentiated the N2-P3 complex, and it is suggested that the changes are regulated by the perceived hedonic valence of the US. PMID:28983243
Touch influences perceived gloss
Adams, Wendy J.; Kerrigan, Iona S.; Graf, Erich W.
2016-01-01
Identifying an object’s material properties supports recognition and action planning: we grasp objects according to how heavy, hard or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (more shiny). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction – slippery objects were rated as glossier and vice versa. The sensory system treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception – a sensible strategy given the ambiguity of visual cues to gloss. PMID:26915492
Multifocal planes head-mounted displays.
Rolland, J P; Krueger, M W; Goon, A
2000-07-01
Stereoscopic head-mounted displays (HMDs) provide an effective capability to create dynamic virtual environments. For a user of such environments, virtual objects would be displayed ideally at the appropriate distances, and natural concordant accommodation and convergence would be provided. Under such image display conditions, the user perceives these objects as if they were objects in a real environment. Current HMD technology requires convergent eye movements. However, it is currently limited by fixed visual accommodation, which is inconsistent with real-world vision. A prototype multiplanar volumetric projection display based on a stack of laminated planes was built for medical visualization as discussed in a paper presented at a 1999 Advanced Research Projects Agency workshop (Sullivan, Advanced Research Projects Agency, Arlington, Va., 1999). We show how such technology can be engineered to create a set of virtual planes appropriately configured in visual space to suppress conflicts of convergence and accommodation in HMDs. Although some scanning mechanism could be employed to create a set of desirable planes from a two-dimensional conventional display, multiplanar technology accomplishes such function with no moving parts. Based on optical principles and human vision, we present a comprehensive investigation of the engineering specification of multiplanar technology for integration in HMDs. Using selected human visual acuity and stereoacuity criteria, we show that the display requires at most 27 equally spaced planes, which is within the capability of current research and development display devices, located within a maximal 26-mm-wide stack. We further show that the necessary in-plane resolution is on the order of 5 μm.
McClain, A D; van den Bos, W; Matheson, D; Desai, M; McClure, S M; Robinson, T N
2014-05-01
The Delboeuf Illusion affects perceptions of the relative sizes of concentric shapes. This study was designed to extend research on the application of the Delboeuf illusion to food on a plate by testing whether a plate's rim width and coloring influence perceptual bias to affect perceived food portion size. Within-subjects experimental design. Experiment 1 tested the effect of rim width on perceived food portion size. Experiment 2 tested the effect of rim coloring on perceived food portion size. In both experiments, participants observed a series of photographic images of paired, side-by-side plates varying in designs and amounts of food. From each pair, participants were asked to select the plate that contained more food. Multilevel logistic regression examined the effects of rim width and coloring on perceived food portion size. Experiment 1: participants overestimated the diameter of food portions by 5% and the visual area of food portions by 10% on plates with wider rims compared with plates with very thin rims (P<0.0001). The effect of rim width was greater with larger food portion sizes. Experiment 2: participants overestimated the diameter of food portions by 1.5% and the visual area of food portions by 3% on plates with rim coloring compared with plates with no coloring (P=0.01). The effect of rim coloring was greater with smaller food portion sizes. The Delboeuf illusion applies to food on a plate. Participants overestimated food portion size on plates with wider and colored rims. These findings may help design plates to influence perceptions of food portion sizes.
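The paired diameter and area figures reported above are mutually consistent under simple circle geometry: scaling a diameter by a factor (1 + d) scales the visual area by (1 + d)², which is roughly 2d for small d. A quick check (an illustrative calculation, not part of the study's analysis):

```python
# Consistency check: a percent overestimate of a circle's diameter implies
# roughly double that percent overestimate of its visual area.

def area_overestimate(diameter_overestimate):
    """Percent area overestimate implied by a percent diameter overestimate."""
    d = diameter_overestimate / 100.0
    return ((1.0 + d) ** 2 - 1.0) * 100.0

print(round(area_overestimate(5.0), 1))   # 10.2; the study reports ~10%
print(round(area_overestimate(1.5), 1))   # 3.0; the study reports ~3%
```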
On Violence against Objects: A Visual Chord
ERIC Educational Resources Information Center
Staley, David J.
2010-01-01
"On Violence Against Objects" is best viewed over several minutes; allow the images to go through several iterations in order to see as many juxtapositions as possible.The visual argument of the work emerges as the viewer perceives analogies between the various images.
An Eye Tracking Examination of Men's Attractiveness by Conceptive Risk Women.
Garza, Ray; Heredia, Roberto R; Cieślicka, Anna B
2017-03-01
Previous research has indicated that women prefer men who exhibit an android physical appearance, in which fat distribution is deposited on the upper body (i.e., shoulders and arms) and abdomen. This ideal physical shape has been associated with perceived dominance, health, and immunocompetence. Although research has investigated the attractiveness of men with these ideal characteristics, research on how women visually perceive these characteristics is limited. The current study investigated visual perception and attraction toward men in Hispanic women of Mexican American descent. Women exposed to a front-posed image, in which the waist-to-chest ratio (WCR) and hair distribution were manipulated, rated men's bodies associated with upper body strength (low WCR, 0.7) as more attractive. Additionally, conceptive risk did not play a strong role in attractiveness and visual attention. Hair distribution did not contribute to increased ratings of attraction, but did contribute to visual attention when measuring total viewing time: men with both facial and body hair were viewed longer. These findings suggest that physical characteristics in men exhibiting upper body strength and dominance are strong predictors of visual attraction.
To Be Immortal, Do Good or Evil.
Gray, Kurt; Anderson, Stephen; Doyle, Cameron M; Hester, Neil; Schmitt, Peter; Vonasch, Andrew J; Allison, Scott T; Jackson, Joshua C
2018-06-01
Many people believe in immortality, but who is perceived to live on and how exactly do they live on? Seven studies reveal that good- and evil-doers are perceived to possess more immortality, albeit different kinds. Good-doers have "transcendent" immortality, with their souls persisting beyond space and time; evil-doers have "trapped" immortality, with their souls persisting on Earth, bound to a physical location. Studies 1 to 4 reveal bidirectional links between perceptions of morality and type of immortality. Studies 5 to 7 reveal how these links explain paranormal perceptions. People generally tie paranormal events to evil spirits (Study 5), but this depends upon location: Evil spirits are perceived to haunt houses and dense forests, whereas good spirits are perceived in expansive locations such as mountaintops (Study 6). However, even good spirits may be seen as trapped on Earth given extenuating circumstances (Study 7). Materials include a scale for measuring trapped and transcendent immortality.
Solar glare hazard analysis tool on account of determined points of time
Ho, Clifford K; Sims, Cianan Alexander
2014-09-23
Technologies pertaining to determining when glare will be perceived by a hypothetical observer from a glare source and the intensity of glare that will be perceived by the hypothetical observer from the glare source are described herein. A first location of a potential source of solar glare is received, and a second location of the hypothetical observer is received. Based upon such locations, including respective elevations, and known positions of the sun over time, a determination as to when the hypothetical observer will perceive glare from the potential source of solar glare is made. Subsequently, an amount of irradiance entering the eye of the hypothetical observer is calculated to assess potential ocular hazards.
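The glare determination described above rests on specular-reflection geometry: the sun's ray, mirrored about the reflector's surface normal, is compared against the direction to the observer. A minimal sketch of that geometry follows; it is an illustration only, not the patented method, and all vectors and any angular threshold are assumed values:

```python
import math

# Illustrative specular-reflection check (not the patented method): glare
# is plausible when the sun's ray, mirrored about the reflector's normal,
# points to within a small angle of the observer. Inputs are assumed.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def reflect(d, n):
    """Mirror direction d about unit normal n: d - 2(d.n)n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def glare_angle_deg(sun_dir, normal, to_observer):
    """Angle between the specularly reflected sun ray and the
    reflector-to-observer direction."""
    r = reflect(normalize(sun_dir), normalize(normal))
    o = normalize(to_observer)
    cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(r, o))))
    return math.degrees(math.acos(cos_a))

# Sun shining straight down on a horizontal reflector:
sun_dir = (0.0, 0.0, -1.0)       # direction of sunlight propagation
normal = (0.0, 0.0, 1.0)
print(glare_angle_deg(sun_dir, normal, (0.0, 0.0, 1.0)))  # ~0 deg: direct hit
print(glare_angle_deg(sun_dir, normal, (1.0, 0.0, 1.0)))  # ~45 deg: off-axis
```

Repeating this check over known sun positions through the day yields the "determined points of time" at which an observer at a given location would perceive glare.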
NASA Technical Reports Server (NTRS)
Post, R. B.; Welch, R. B.
1996-01-01
Visually perceived eye level (VPEL) was measured while subjects viewed two vertical lines that were either upright or pitched about the horizontal axis. In separate conditions, the display consisted of a relatively large pair of lines viewed at a distance of 1 m, or a display scaled to one third the dimensions and viewed at a distance of either 1 m or 33.3 cm. The small display viewed at 33.3 cm produced a retinal image the same size as that of the large display at 1 m. Pitch of all three displays top-toward and top-away from the observer caused upward and downward VPEL shifts, respectively. These effects were highly similar for the large display and the small display viewed at 33.3 cm (i.e., equal retinal size), but were significantly smaller for the small display viewed at 1 m. In a second experiment, perceived size of the three displays was measured and found to be highly accurate. The results of the two experiments indicate that the effect of optical pitch on VPEL depends on the retinal image size of stimuli rather than on perceived size.
Dalton, Brian H; Rasman, Brandon G; Inglis, J Timothy; Blouin, Jean-Sébastien
2017-04-15
We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures. Furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. 
Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Negative dysphotopsia: Causes and rationale for prevention and treatment.
Holladay, Jack T; Simpson, Michael J
2017-02-01
To determine the cause of negative dysphotopsia using standard ray-tracing techniques and identify the primary and secondary causative factors. Department of Ophthalmology, Baylor College of Medicine, Houston, Texas, USA. Experimental study. Zemax ray-tracing software was used to evaluate pseudophakic and phakic eye models to show the location of retinal field images from various visual field objects. Phakic retinal field angles (RFAs) were used as a reference for the perceived field locations for retinal images in pseudophakic eyes. In a nominal acrylic pseudophakic eye model with a 2.5 mm diameter pupil, the maximum RFA from rays refracted by the intraocular lens (IOL) was 85.7 degrees and the minimum RFA for rays missing the optic of the IOL was 88.3 degrees, leaving a dark gap (shadow) of 2.6 degrees in the extreme temporal field. The width of the shadow was more prominent for a smaller pupil, a larger angle kappa, an equi-biconvex or plano-convex IOL shape, and a smaller axial distance from iris to IOL and with the anterior capsule overlying the nasal IOL. Secondary factors included IOL edge design, material, diameter, decentration, tilt, and aspheric surfaces. Standard ray-tracing techniques showed that a shadow is present when there is a gap between the retinal images formed by rays missing the optic of the IOL and rays refracted by the IOL. Primary and secondary factors independently affected the width and location of the gap (or overlap). The ray tracing also showed a constriction and double retinal imaging in the extreme temporal visual field. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Jacobs, Richard H A H; Haak, Koen V; Thumfart, Stefan; Renken, Remco; Henson, Brian; Cornelissen, Frans W
2016-01-01
Our world is filled with texture. For the human visual system, this is an important source of information for assessing environmental and material properties. Indeed, and presumably for this reason, the human visual system has regions dedicated to processing textures. Despite their abundance and apparent relevance, and despite long-standing indications of such relationships, the links between texture features and high-level judgments have only recently captured the interest of mainstream science. In this study, we explore such relationships, as these might be used to predict perceived texture qualities. This is relevant, not only from a psychological/neuroscience perspective, but also for more applied fields such as design, architecture, and the visual arts. In two separate experiments, observers judged various qualities of visual textures such as beauty, roughness, naturalness, elegance, and complexity. Based on factor analysis, we find that in both experiments, ~75% of the variability in the judgments could be explained by a two-dimensional space, with axes that are closely aligned to the beauty and roughness judgments. That a two-dimensional judgment space suffices to capture most of the variability in the perceived texture qualities suggests that observers use a relatively limited set of internal scales on which to base various judgments, including aesthetic ones. Finally, for both of these judgments, we determined the relationship with a large number of texture features computed for each of the texture stimuli. We find that the presence of lower spatial frequencies, oblique orientations, higher intensity variation, higher saturation, and redness correlates with higher beauty ratings. Features that captured image intensity and uniformity correlated with roughness ratings. Therefore, a number of computational texture features are predictive of these judgments. This suggests that perceived texture qualities, including their aesthetic appreciation, are sufficiently universal to be predicted, with reasonable accuracy, from the computed feature content of the textures.
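The two-dimensional account can be checked numerically: project a stimuli-by-qualities rating matrix onto its principal components and measure how much variance the first two capture. A sketch with synthetic ratings (all data below are hypothetical, constructed to have two latent axes; the study itself used factor analysis on real judgments):

```python
import numpy as np

def variance_explained_2d(ratings):
    """Fraction of variance in a (stimuli x qualities) rating matrix
    captured by its first two principal components."""
    X = ratings - ratings.mean(axis=0)      # centre each quality scale
    s = np.linalg.svd(X, compute_uv=False)  # singular values
    v = s ** 2
    return v[:2].sum() / v.sum()

# Hypothetical ratings: 60 textures x 5 qualities, generated from two
# latent dimensions (stand-ins for "beauty" and "roughness") plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 2))
ratings = latent @ rng.normal(size=(2, 5)) + 0.3 * rng.normal(size=(60, 5))
share = variance_explained_2d(ratings)  # high, comparable to the ~75% reported
```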
Wolf, Amparo; Coros, Alexandra; Bierer, Joel; Goncalves, Sandy; Cooper, Paul; Van Uum, Stan; Lee, Donald H; Proulx, Alain; Nicolle, David; Fraser, J Alexander; Rotenberg, Brian W; Duggal, Neil
2017-08-01
OBJECTIVE Endoscopic resection of pituitary adenomas has been reported to improve vision function in up to 80%-90% of patients with visual impairment due to these adenomas. It is unclear how these reported rates translate into improvement in visual outcomes and general health as perceived by the patients. The authors evaluated self-assessed health-related quality of life (HR-QOL) and vision-related QOL (VR-QOL) in patients before and after endoscopic resection of pituitary adenomas. METHODS The authors prospectively collected data from 50 patients who underwent endoscopic resection of pituitary adenomas. This cohort included 32 patients (64%) with visual impairment preoperatively. Twenty-seven patients (54%) had pituitary dysfunction, including 17 (34%) with hormone-producing tumors. Patients completed the National Eye Institute Visual Functioning Questionnaire and the 36-Item Short Form Health Survey preoperatively and 6 weeks and 6 months after surgery. RESULTS Patients with preoperative visual impairment reported a significant impact of this condition on VR-QOL preoperatively, including general vision, near activities, and peripheral vision; they also noted vision-specific impacts on mental health, role difficulties, dependency, and driving. After endoscopic resection of adenomas, patients reported improvement across all these categories 6 weeks postoperatively, and this improvement was maintained by 6 months postoperatively. Patients with preoperative pituitary dysfunction, including hormone-producing tumors, perceived their general health and physical function as poorer, with some of these patients reporting improvement in perceived general health after the endoscopic surgery. All patients noted that their ability to work or perform activities of daily living was transiently reduced 6 weeks postoperatively, followed by significant improvement by 6 months after the surgery. 
CONCLUSIONS Both VR-QOL and patients' perceptions of their ability to work and perform other daily activities as a result of their physical health significantly improved by 6 months after endoscopic resection of pituitary adenoma. The use of multidimensional QOL questionnaires provides a precise assessment of perceived outcomes after endoscopic surgery.
Tactile mental body parts representation in obesity.
Scarpina, Federica; Castelnuovo, Gianluca; Molinari, Enrico
2014-12-30
Obese people's distortions in visually-based mental body-parts representations have been reported in previous studies, but other sensory modalities have largely been neglected. In the present study, we investigated possible differences in tactilely-based body-parts representation between an obese and a healthy-weight group; additionally, we explored the possible relationship between the tactile- and the visually-based body representation. Participants were asked to estimate the distance between two tactile stimuli that were simultaneously administered on the arm or on the abdomen, in the absence of visual input. The visually-based body-parts representation was investigated by a visual imagery method in which subjects were instructed to compare the horizontal extension of body part pairs. According to the results, the obese participants overestimated the size of the tactilely-perceived distances more than the healthy-weight group did when the arm, and not the abdomen, was stimulated. Moreover, they reported a lower level of accuracy than did the healthy-weight group when estimating horizontal distances relative to their bodies, confirming an inappropriate visually-based mental body representation. Our results imply that body representation disturbance in obese people is not limited to the visual mental domain, but spreads to tactilely perceived distances. The inaccuracy was not a generalized tendency but was body-part related. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Matching optical flow to motor speed in virtual reality while running on a treadmill
Caramenti, Martina; Lafortuna, Claudio L.; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine
2018-01-01
We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement in physical activity for healthier lifestyles and disease prevention and care. PMID:29641564
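The staircase procedure described above can be sketched as a simple 1-up/1-down rule that drives the visual speed toward the PSE. Everything below is a hypothetical simulation (step size, trial count, and the simulated observer are illustrative; the study's exact staircase parameters are not given in the abstract):

```python
import random

def staircase_pse(pse_true, start, step=0.5, n_trials=40, noise=0.3):
    """1-up/1-down staircase: raise the visual speed after a 'slower than
    running' response, lower it after 'faster'. The track oscillates
    around the point of subjective equality (PSE)."""
    visual, track = start, []
    for _ in range(n_trials):
        # Simulated observer: noisy comparison against the visual speed
        # that would feel equivalent to the running speed.
        reported_faster = visual + random.gauss(0, noise) > pse_true
        visual += -step if reported_faster else step
        track.append(visual)
    return sum(track[-10:]) / 10  # estimate: mean of the last 10 levels

random.seed(1)
# At 12 km/h the abstract reports ~31% underestimation, i.e. a visual
# scene PSE near 12 * 1.31 = 15.72 km/h (hypothetical illustration).
pse_est = staircase_pse(pse_true=12 * 1.31, start=12.0)
```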
ERIC Educational Resources Information Center
Hammitt, William E.; And Others
1984-01-01
Use level, visual encounters, crowding expectations, and feelings were examined by regression techniques to explain perceived crowding among innertube floaters. Degree of user specialization and specificity for any given activity and place is offered as an explanation for the discrepancy from previous findings. (Author/DF)
Listening Natively across Perceptual Domains?
ERIC Educational Resources Information Center
Langus, Alan; Seyed-Allaei, Shima; Uysal, Ertugrul; Pirmoradian, Sahar; Marino, Caterina; Asaadi, Sina; Eren, Ömer; Toro, Juan M.; Peña, Marcela; Bion, Ricardo A. H.; Nespor, Marina
2016-01-01
Our native tongue influences the way we perceive other languages. But does it also determine the way we perceive nonlinguistic sounds? The authors investigated how speakers of Italian, Turkish, and Persian group sequences of syllables, tones, or visual shapes alternating in either frequency or duration. We found strong native listening effects…
Visual Cues and Perceived Reachability
ERIC Educational Resources Information Center
Gabbard, Carl; Ammar, Diala
2005-01-01
A rather consistent finding in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate at midline. Explanations of such behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom,…
Perceived Reachability in Hemispace
ERIC Educational Resources Information Center
Gabbard, C.; Ammar, D.; Rodrigues, L.
2005-01-01
A common observation in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate. Of the studies noted, reaching tasks have been presented in the general midline range. In the present study, strong right-handers were asked to judge the reachability of visual targets projected onto a table…
Perceived state of self during motion can differentially modulate numerical magnitude allocation.
Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M
2016-09-01
Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. 
We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Audiovisual Temporal Processing and Synchrony Perception in the Rat.
Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L
2016-01-01
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. 
Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level.
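The JND reported for the temporal order task is conventionally derived from the psychometric function relating SOA to the proportion of "visual first" reports. A model-free sketch of that computation (the data below are hypothetical; the study fit psychometric functions per rat):

```python
import numpy as np

def jnd_from_toj(soas_ms, p_visual_first):
    """Just-noticeable difference: half the SOA distance between the 25%
    and 75% 'visual first' points, found by linear interpolation (a
    model-free stand-in for a cumulative-Gaussian fit)."""
    x25 = np.interp(0.25, p_visual_first, soas_ms)  # xp must be increasing
    x75 = np.interp(0.75, p_visual_first, soas_ms)
    return (x75 - x25) / 2

# Hypothetical judgments (negative SOA: auditory first; positive: visual first)
soas = np.array([-200.0, -100.0, -40.0, 0.0, 40.0, 100.0, 200.0])
p_vf = np.array([0.02, 0.10, 0.30, 0.52, 0.70, 0.90, 0.98])
jnd = jnd_from_toj(soas, p_vf)  # 55.0 ms for these made-up data
```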
Image Location Estimation by Salient Region Matching.
Qian, Xueming; Zhao, Yisi; Han, Junwei
2015-11-01
Nowadays, the locations of images are widely used in many application scenarios involving large geo-tagged image corpora. For images that are not geographically tagged, we estimate their locations with the help of a large geo-tagged image set via content-based image retrieval. In this paper, we exploit the spatial information of useful visual words to improve image location estimation (i.e., content-based image retrieval performance). We propose generating visual word groups by mean-shift clustering. To improve retrieval performance, a spatial constraint is used to encode the relative positions of visual words. We propose generating a position descriptor for each visual word and building a fast indexing structure for the visual word groups. Experiments show the effectiveness of the proposed approach.
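The mean-shift grouping step can be illustrated with a toy flat-kernel implementation: points (here standing in for visual-word positions) iteratively move to the mean of their neighbours within a bandwidth, and points converging to the same mode form one group. This is a generic sketch, not the paper's implementation:

```python
import numpy as np

def mean_shift_groups(points, bandwidth=1.0, n_iter=50):
    """Flat-kernel mean shift: shift every point to the mean of its
    neighbours within `bandwidth` until modes emerge; points converging
    to the same mode share a group label."""
    shifted = np.asarray(points, dtype=float).copy()
    for _ in range(n_iter):
        for i in range(len(shifted)):
            near = shifted[np.linalg.norm(shifted - shifted[i], axis=1) < bandwidth]
            shifted[i] = near.mean(axis=0)
    # Points whose modes coincide (to 1 decimal) get the same label
    return np.unique(shifted.round(1), axis=0, return_inverse=True)[1]

# Hypothetical visual-word positions in an image: two spatial clusters
pts = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [5.0, 5.0], [5.2, 4.9]]
labels = mean_shift_groups(pts)  # two groups, e.g. [0, 0, 0, 1, 1]
```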
[Visual cuing effect for haptic angle judgment].
Era, Ataru; Yokosawa, Kazuhiko
2009-08-01
We investigated whether visual cues are useful for judging haptic angles. Participants explored three-dimensional angles with a virtual haptic feedback device. For visual cues, we used a location cue, which was synchronized with haptic exploration, and a space cue, which specified the haptic space. In Experiment 1, angles were judged more correctly with both cues, but were overestimated with a location cue only. In Experiment 2, the visual cues emphasized depth; overestimation with location cues still occurred, but space cues had no influence. The results showed that (a) when both cues are presented, haptic angles are judged more correctly; (b) location cues facilitate only motion information, not depth information; and (c) haptic angles are apt to be overestimated when both haptic and visual information are available.
Self-perceived health status, gender, and work status.
Pino-Domínguez, Lara; Navarro-Gil, Patricia; González-Vélez, Abel E; Prieto-Flores, Maria-Eugenia; Ayala, Alba; Rojo-Pérez, Fermina; Fernández-Mayoralas, Gloria; Martínez-Martín, Pablo; Forjaz, Maria João
2016-01-01
This study analyzes the relationship between gender and self-perceived health status in Spanish retirees and housewives from a sample of 1,106 community-dwelling older adults. A multivariate linear regression model was used in which self-perceived health status was measured by the EQ-5D visual analogue scale and gender according to work status (retired men and women and housewives). Retired males reported a significantly better health status than housewives. Self-perceived health status was closely associated with physical, mental, and functional health and leisure activities. Finally, being a woman with complete dedication to domestic work is associated with a worse state of self-perceived health.
Decoding illusory self-location from activity in the human hippocampus.
Guterstam, Arvid; Björnsdotter, Malin; Bergouignan, Loretxu; Gentile, Giovanni; Li, Tie-Qiang; Ehrsson, H Henrik
2015-01-01
Decades of research have demonstrated a role for the hippocampus in spatial navigation and episodic and spatial memory. However, empirical evidence linking hippocampal activity to the perceptual experience of being physically located at a particular place in the environment is lacking. In this study, we used a multisensory out-of-body illusion to perceptually 'teleport' six healthy participants between two different locations in the scanner room during high-resolution functional magnetic resonance imaging (fMRI). The participants were fitted with MRI-compatible head-mounted displays that changed their first-person visual perspective to that of a pair of cameras placed in one of two corners of the scanner room. To elicit the illusion of being physically located in this position, we delivered synchronous visuo-tactile stimulation in the form of an object moving toward the cameras coupled with touches applied to the participant's chest. Asynchronous visuo-tactile stimulation did not induce the illusion and served as a control condition. We found that illusory self-location could be successfully decoded from patterns of activity in the hippocampus in all of the participants in the synchronous (P < 0.05) but not in the asynchronous condition (P > 0.05). At the group-level, the decoding accuracy was significantly higher in the synchronous than in the asynchronous condition (P = 0.012). These findings associate hippocampal activity with the perceived location of the bodily self in space, which suggests that the human hippocampus is involved not only in spatial navigation and memory but also in the construction of our sense of bodily self-location.
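Pattern decoding of this kind can be illustrated with a leave-one-out nearest-centroid classifier over voxel activity patterns. The data and classifier below are hypothetical simplifications (the study used its own MVPA pipeline on hippocampal voxels):

```python
import numpy as np

def loo_decode_accuracy(patterns, labels):
    """Leave-one-out nearest-centroid decoding: classify each held-out
    activity pattern by the closer of the two class centroids computed
    from the remaining trials."""
    X, y = np.asarray(patterns, float), np.asarray(labels)
    classes = np.unique(y)
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        centroids = [X[train & (y == c)].mean(axis=0) for c in classes]
        dists = [np.linalg.norm(X[i] - c) for c in centroids]
        correct += int(classes[np.argmin(dists)] == y[i])
    return correct / len(y)

# Hypothetical fMRI data: 20 trials x 50 voxels, two illusory self-locations
rng = np.random.default_rng(42)
y = np.repeat([0, 1], 10)
X = rng.normal(size=(20, 50)) + y[:, None]  # self-location shifts the pattern
acc = loo_decode_accuracy(X, y)  # well above the 0.5 chance level
```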
Visual cortex activation in kinesthetic guidance of reaching.
Darling, W G; Seitz, R J; Peltier, S; Tellmann, L; Butler, A J
2007-06-01
The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used (15)O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which arm (left/right) used to encode target location and to reach back to the remembered location and hemispace of target location (left/right side of midsagittal plane) varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in left versus right hemispace nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may also involve visualization of kinesthetically guided target location and use of the same network employed to guide reaches to visual targets when reaching to kinesthetic targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.
Weakley, Jonathon Js; Wilson, Kyle M; Till, Kevin; Read, Dale B; Darrall-Jones, Joshua; Roe, Gregory; Phibbs, Padraic J; Jones, Ben
2017-07-12
It is unknown whether instantaneous visual feedback of resistance training outcomes can enhance barbell velocity in younger athletes. Therefore, the purpose of this study was to quantify the effects of visual feedback on mean concentric barbell velocity in the back squat, and to identify changes in motivation, competitiveness, and perceived workload. In a randomised-crossover design (Feedback vs. Control), feedback of mean concentric barbell velocity was or was not provided throughout a set of 10 repetitions in the barbell back squat. Magnitude-based inferences were used to assess changes between conditions, with almost certainly greater differences in mean concentric velocity between the Feedback (0.70 ±0.04 m·s⁻¹) and Control (0.65 ±0.05 m·s⁻¹) conditions observed. Additionally, individual repetition mean concentric velocity ranged from possibly (repetition 2: 0.79 ±0.04 vs. 0.78 ±0.04 m·s⁻¹) to almost certainly (repetition 10: 0.58 ±0.05 vs. 0.49 ±0.05 m·s⁻¹) greater when feedback was provided, while almost certain differences were also observed in motivation, competitiveness, and perceived workload. Providing adolescent male athletes with visual kinematic information while completing resistance training is beneficial for the maintenance of barbell velocity during a training set, potentially enhancing physical performance. Moreover, these improvements were observed alongside increases in motivation, competitiveness, and perceived workload, providing insight into the underlying mechanisms responsible for the performance gains observed. Given the observed maintenance of barbell velocity during a training set, practitioners can use this technique to manipulate training outcomes during resistance training.
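The headline velocity difference can be expressed as a simple percent change relative to the control condition (the velocity values are from the abstract; the study's actual inferences used magnitude-based statistics, not shown here):

```python
def percent_change(feedback, control):
    """Percent change in mean concentric barbell velocity with visual
    feedback, relative to the control (no-feedback) condition."""
    return 100.0 * (feedback - control) / control

whole_set = percent_change(0.70, 0.65)  # about +7.7% across the 10-rep set
final_rep = percent_change(0.58, 0.49)  # about +18.4% on the final repetition
```

The growing gap across repetitions reflects the abstract's point: feedback mainly prevents the velocity decline late in the set.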
Estimation of detection thresholds for redirected walking techniques.
Steinicke, Frank; Bruder, Gerd; Jerald, Jason; Frenz, Harald; Lappe, Markus
2010-01-01
In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature when walking a curved path in the real world while the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
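The rotation-gain thresholds can be read as bounds on an undetected remapping between virtual and physical rotation. A sketch of applying such a gain (defined here as physical = virtual × gain, matching the abstract's phrasing; note that redirected-walking papers sometimes define the gain the other way round):

```python
def physical_rotation_deg(virtual_rotation_deg, gain):
    """Physical rotation a user performs when a rotation gain remaps
    virtual turning to physical turning: physical = virtual * gain."""
    return virtual_rotation_deg * gain

# Detection thresholds from the abstract: up to ~49% more or ~20% less
# physical rotation than the perceived virtual rotation goes unnoticed.
turned_more = physical_rotation_deg(90.0, 1.49)  # ~134.1 deg feels like 90 deg
turned_less = physical_rotation_deg(90.0, 0.80)  # ~72.0 deg feels like 90 deg
```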
ERIC Educational Resources Information Center
Srinivasan, Ravindra J.; Massaro, Dominic W.
2003-01-01
Examined the processing of potential auditory and visual cues that differentiate statements from echoic questions. Found that both auditory and visual cues reliably conveyed statement and question intonation, were successfully synthesized, and generalized to other utterances. (Author/VWL)
Developing Local Scale, High Resolution, Data to Interface with Numerical Hurricane Models
NASA Astrophysics Data System (ADS)
Witkop, R.; Becker, A.
2017-12-01
In 2017, the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) developed hurricane models that specify wind speed, inundation, and erosion around Rhode Island with enough precision to incorporate impacts on individual facilities. At the same time, URI's Marine Affairs Visualization Lab (MAVL) developed a way to realistically visualize these impacts in 3-D. Since climate change visualizations and water resource simulations have been shown to promote resiliency action (Sheppard, 2015) and increase credibility (White et al., 2010) when local knowledge is incorporated, URI's hurricane models and visualizations may also more effectively enable hurricane resilience actions if they include Facility Manager (FM) and Emergency Manager (EM) perceived hurricane impacts. This study determines how FMs and EMs perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale while exploring methods to elicit this information from FMs and EMs in a format usable for incorporation into URI GSO's hurricane models.
Illusions of having small or large invisible bodies influence visual perception of object size
van der Hoort, Björn; Ehrsson, H. Henrik
2016-01-01
The size of our body influences the perceived size of the world so that objects appear larger to children than to adults. The mechanisms underlying this effect remain unclear. It has been difficult to dissociate visual rescaling of the external environment based on an individual’s visible body from visual rescaling based on a central multisensory body representation. To differentiate these potential causal mechanisms, we manipulated body representation without a visible body by taking advantage of recent developments in body representation research. Participants experienced the illusion of having a small or large invisible body while object-size perception was tested. Our findings show that the perceived size of test-objects was determined by the size of the invisible body (inverse relation), and by the strength of the invisible body illusion. These findings demonstrate how central body representation directly influences visual size perception, without the need for a visible body, by rescaling the spatial representation of the environment. PMID:27708344
Multiple Fingers - One Gestalt.
Lezkan, Alexandra; Manuel, Steven G; Colgate, J Edward; Klatzky, Roberta L; Peshkin, Michael A; Drewing, Knut
2016-01-01
The Gestalt theory of perception offered principles by which distributed visual sensations are combined into a structured experience ("Gestalt"). We demonstrate conditions whereby haptic sensations at two fingertips are integrated in the perception of a single object. When virtual bumps were presented simultaneously to the right hand's thumb and index finger during lateral arm movements, participants reported perceiving a single bump. A discrimination task measured the bump's perceived location and perceptual reliability (assessed by differential thresholds) for four finger configurations, which varied in their adherence to the Gestalt principles of proximity (small versus large finger separation) and synchrony (virtual spring to link movements of the two fingers versus no spring). According to models of integration, reliability should increase with the degree to which multi-finger cues integrate into a unified percept. Differential thresholds were smaller in the virtual-spring condition (synchrony) than when fingers were unlinked. Additionally, in the condition with reduced synchrony, greater proximity led to lower differential thresholds. Thus, with greater adherence to Gestalt principles, thresholds approached values predicted for optimal integration. We conclude that the Gestalt principles of synchrony and proximity apply to haptic perception of surface properties and that these principles can interact to promote multi-finger integration.
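The "values predicted for optimal integration" referenced above conventionally follow the maximum-likelihood cue-combination rule, in which discrimination thresholds (proportional to cue noise) combine reciprocally in their squares. A sketch of that prediction with illustrative single-finger thresholds, not the paper's data:

```python
import math

def optimal_threshold(t1, t2):
    """Maximum-likelihood prediction for the combined differential
    threshold of two integrated cues: 1/T^2 = 1/t1^2 + 1/t2^2, so the
    combined threshold never exceeds the better single-cue threshold."""
    return math.sqrt((t1**2 * t2**2) / (t1**2 + t2**2))

# Illustrative single-finger thresholds (e.g., in mm of bump offset):
t_index, t_thumb = 1.2, 1.5
print(optimal_threshold(t_index, t_thumb))
```

With equal single-cue thresholds, the prediction is a 1/√2 reduction; thresholds approaching this value, as in the synchrony condition, are the signature of integration.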
Visual-Accommodation Trainer/Tester
NASA Technical Reports Server (NTRS)
Randle, Robert J., Jr.
1986-01-01
Ophthalmic instrument tests and helps develop focusing ability. Movable stage on a fixed base permits adjustment of effective target position as perceived by subject. Various apertures used to perform tests and training procedures. Ophthalmic instrument provides four functions: it measures visual near and far points; provides focus stimulus in vision research; measures visual-accommodation resting position; can be used to train for volitional control of person's focus response.
The primary visual cortex in the neural circuit for visual orienting
NASA Astrophysics Data System (ADS)
Zhaoping, Li
The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting such as shifting gaze and turning the head. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations by higher V1 responses. The conspicuous locations are usually, but not always, where visual input statistics change. The population of V1 outputs to SC, which is also retinotopic, enables SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
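The saliency-map-plus-winner-take-all scheme described above can be caricatured in a few lines: saliency at each retinotopic location is the maximum response across feature-tuned maps, and an SC-style competition selects the single most salient location. A toy sketch, not the actual circuit model:

```python
def v1_saliency(feature_maps):
    """Saliency at each location = the maximum response across
    feature-tuned maps (orientation, color, motion direction, ...)."""
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    return [[max(fm[r][c] for fm in feature_maps) for c in range(cols)]
            for r in range(rows)]

def saccade_target(saliency):
    """Winner-take-all over the map, as lateral inhibition between
    SC neurons would implement: return the most salient location."""
    return max(((r, c) for r in range(len(saliency))
                for c in range(len(saliency[0]))),
               key=lambda rc: saliency[rc[0]][rc[1]])

# Toy example: a uniquely oriented item pops out among distractors.
orientation = [[1.0] * 4 for _ in range(4)]
orientation[2][3] = 3.0  # the odd one out drives a higher response
color = [[1.0] * 4 for _ in range(4)]
print(saccade_target(v1_saliency([orientation, color])))  # (2, 3)
```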
The impact of continuity editing in narrative film on event segmentation.
Magliano, Joseph P; Zacks, Jeffrey M
2011-01-01
Filmmakers use continuity editing to engender a sense of situational continuity or discontinuity at editing boundaries. The goal of this study was to assess the impact of continuity editing on how people perceive the structure of events in a narrative film and to identify brain networks that are associated with the processing of different types of continuity editing boundaries. Participants viewed a commercially produced film and segmented it into meaningful events, while brain activity was recorded with functional magnetic resonance imaging (fMRI). We identified three degrees of continuity that can occur at editing locations: edits that are continuous in space, time, and action; edits that are discontinuous in space or time but continuous in action; and edits that are discontinuous in action as well as space or time. Discontinuities in action had the biggest impact on behavioral event segmentation, and discontinuities in space and time had minor effects. Edits were associated with large transient increases in early visual areas. Spatial-temporal changes and action changes produced strikingly different patterns of transient change, and they provided evidence that specialized mechanisms in higher order perceptual processing regions are engaged to maintain continuity of action in the face of spatiotemporal discontinuities. These results suggest that commercial film editing is shaped to support the comprehension of meaningful events that bridge breaks in low-level visual continuity, and even breaks in continuity of spatial and temporal location. Copyright © 2011 Cognitive Science Society, Inc.
Dynamic and predictive links between touch and vision.
Gray, Rob; Tan, Hong Z
2002-07-01
We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.
SmartAdP: Visual Analytics of Large-scale Taxi Trajectories for Selecting Billboard Locations.
Liu, Dongyu; Weng, Di; Li, Yuhong; Bao, Jie; Zheng, Yu; Qu, Huamin; Wu, Yingcai
2017-01-01
The problem of formulating solutions immediately and comparing them rapidly for billboard placements has plagued advertising planners for a long time, owing to the lack of efficient tools for in-depth analyses to make informed decisions. In this study, we attempt to employ visual analytics that combines state-of-the-art mining and visualization techniques to tackle this problem using large-scale GPS trajectory data. In particular, we present SmartAdP, an interactive visual analytics system that deals with two major challenges: finding good solutions in a huge solution space and comparing the solutions in a visual and intuitive manner. An interactive framework that integrates a novel visualization-driven data mining model enables advertising planners to effectively and efficiently formulate good candidate solutions. In addition, we propose a set of coupled visualizations: a solution view with metaphor-based glyphs to visualize the correlation between different solutions; a location view to display billboard locations in a compact manner; and a ranking view to present multi-typed rankings of the solutions. This system has been demonstrated using case studies with a real-world dataset and domain-expert interviews. Our approach can be adapted for other location selection problems such as selecting locations of retail stores or restaurants using trajectory data.
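The abstract does not specify the mining model, but a common baseline for this kind of trajectory-based location selection is greedy maximum coverage: repeatedly pick the candidate site seen by the most not-yet-covered trajectories. A minimal sketch with hypothetical toy data:

```python
def greedy_billboards(candidates, trajectories, k):
    """Greedy maximum-coverage selection of k billboard sites.
    Each trajectory is a tuple of the sites it passes; at every step,
    pick the site covering the most trajectories not yet covered."""
    candidates = list(candidates)
    covered, chosen = set(), []
    for _ in range(min(k, len(candidates))):
        best = max(candidates,
                   key=lambda s: len({t for t in trajectories
                                      if s in t} - covered))
        chosen.append(best)
        candidates.remove(best)
        covered |= {t for t in trajectories if best in t}
    return chosen, len(covered)

# Hypothetical toy data: four taxi trips over four candidate sites.
trips = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
sites = ["a", "b", "c", "d"]
chosen, n_covered = greedy_billboards(sites, trips, k=2)
print(chosen, n_covered)  # two sites suffice to cover all four trips
```

Greedy maximum coverage carries the classic (1 − 1/e) approximation guarantee, which is one reason it serves as a reasonable candidate generator inside interactive systems like the one described.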
Revisiting Curriculum Inquiry: The Role of Visual Representations
ERIC Educational Resources Information Center
Eilam, Billie; Ben-Peretz, Miriam
2010-01-01
How do visual representations (VRs) in curriculum materials influence theoretical curriculum frameworks? Suggesting that VRs' integration into curriculum materials affords a different lens for perceiving and understanding the curriculum domain, this study draws on a curricular perspective in relation to multi-representations in texts rather than…
Sounds Exaggerate Visual Shape
ERIC Educational Resources Information Center
Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…
Putting to a bigger hole: Golf performance relates to perceived size
Witt, Jessica K.; Linkenauger, Sally A.; Bakdash, Jonathan Z.; Proffitt, Dennis R.
2011-01-01
When engaged in a skilled behaviour such as occurs in sports, people's perceptions relate optical information to their performance. In the current research, we demonstrate the effects of performance on size perception in golfers. We found golfers who played better judged the hole to be bigger than golfers who did not play as well (Study 1). In follow-up laboratory experiments, participants putted on a golf mat from a location near or far from the hole then judged the size of the hole. Participants who putted from the near location perceived the hole to be bigger than participants who putted from the far location. Our results demonstrate that perception is influenced by the perceiver's current ability to act effectively in the environment. PMID:18567258
Dokka, Kalpana; DeAngelis, Gregory C.
2015-01-01
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision.
Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
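The "optimal cue integration strategy" invoked above is conventionally the reliability-weighted average, which also shows arithmetically why a bias confined to one cue shrinks in the combined estimate. A sketch with illustrative numbers, not the paper's data:

```python
def combine(est_visual, var_visual, est_vestib, var_vestib):
    """Reliability-weighted (optimal) combination of two heading
    estimates: weights are inverse variances, and the combined
    variance is the reciprocal sum of the single-cue precisions."""
    w_vis = (1 / var_visual) / (1 / var_visual + 1 / var_vestib)
    est = w_vis * est_visual + (1 - w_vis) * est_vestib
    var = 1 / (1 / var_visual + 1 / var_vestib)
    return est, var

# Illustrative numbers: the moving object biases the visual heading
# estimate by +4 deg; the vestibular cue is unbiased; equal variances.
est, var = combine(4.0, 4.0, 0.0, 4.0)
print(est, var)  # bias halved to 2.0 deg, variance halved to 2.0
```

With equal reliabilities the visual bias is halved and the variance is halved, matching the qualitative pattern in the abstract: reduced bias with no loss, indeed a gain, in precision.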
Does apparent size capture attention in visual search? Evidence from the Muller-Lyer illusion.
Proulx, Michael J; Green, Monique
2011-11-23
Is perceived size a crucial factor for the bottom-up guidance of attention? Here, a visual search experiment was used to examine whether an irrelevantly longer object can capture attention when participants were to detect a vertical target item. The longer object was created by an apparent size manipulation, the Müller-Lyer illusion; however, all objects contained the same number of pixels. The vertical target was detected more efficiently when it was also perceived as the longer item that was defined by apparent size. Further analysis revealed that the longer Müller-Lyer object received a greater degree of attentional priority than published results for other features such as retinal size, luminance contrast, and the abrupt onset of a new object. The present experiment has demonstrated for the first time that apparent size can capture attention and, thus, provide bottom-up guidance on the basis of perceived salience.
The brain's dress code: How The Dress allows to decode the neuronal pathway of an optical illusion.
Schlaffke, Lara; Golisch, Anne; Haag, Lauren M; Lenz, Melanie; Heba, Stefanie; Lissek, Silke; Schmidt-Wilcke, Tobias; Eysel, Ulf T; Tegenthoff, Martin
2015-12-01
Optical illusions have broadened our understanding of the brain's role in visual perception. A modern day optical illusion emerged from a posted photo of a striped dress, which some perceived as white and gold and others as blue and black. Here we show, using functional magnetic resonance imaging (fMRI), that those who perceive The Dress as white/gold have higher activation in response to the image of The Dress in brain regions critically involved in higher cognition (frontal and parietal brain areas). These results are consistent with theories of top-down modulation and present a neural signature associated with the differences in perceiving The Dress as white/gold or blue/black. Furthermore the results support recent psychophysiological data on this phenomenon and provide a fundamental building block to study interindividual differences in visual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Children Perceive Speech Onsets by Ear and Eye
ERIC Educational Resources Information Center
Jerger, Susan; Damian, Markus F.; Tye-Murray, Nancy; Abdi, Herve
2017-01-01
Adults use vision to perceive low-fidelity speech; yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize…
Recognition of Amodal Language Identity Emerges in Infancy
ERIC Educational Resources Information Center
Lewkowicz, David J.; Pons, Ferran
2013-01-01
Audiovisual speech consists of overlapping and invariant patterns of dynamic acoustic and optic articulatory information. Research has shown that infants can perceive a variety of basic auditory-visual (A-V) relations but no studies have investigated whether and when infants begin to perceive higher order A-V relations inherent in speech. Here, we…
Alphabetic letter identification: Effects of perceivability, similarity, and bias
Mueller, Shane T.; Weidemann, Christoph T.
2012-01-01
The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each makes powerful contributions to visual letter identification. PMID:22036587
The effects of auditive and visual settings on perceived restoration likelihood
Jahncke, Helena; Eriksson, Karolina; Naula, Sanna
2015-01-01
Research has so far paid little attention to how environmental sounds might affect restorative processes. The aim of the present study was to investigate the effects of auditive and visual stimuli on perceived restoration likelihood and attitudes towards varying environmental resting conditions. Assuming a condition of cognitive fatigue, all participants (N = 40) were presented with images of an open plan office and urban nature, each under four sound conditions (nature sound, quiet, broadband noise, office noise). After the presentation of each setting/sound combination, the participants assessed it according to restorative qualities, restoration likelihood and attitude. The results mainly showed predicted effects of the sound manipulations on the perceived restorative qualities of the settings. Further, significant interactions between auditive and visual stimuli were found for all measures. Both nature sounds and quiet more positively influenced evaluations of the nature setting compared to the office setting. When office noise was present, both settings received poor evaluations. The results agree with expectations that nature sounds and quiet areas support restoration, while office noise and broadband noise (e.g. ventilation, traffic noise) do not. The findings illustrate the significance of environmental sound for restorative experience. PMID:25599752
Perception of Egocentric Distance during Gravitational Changes in Parabolic Flight.
Clément, Gilles; Loureiro, Nuno; Sousa, Duarte; Zandvliet, Andre
2016-01-01
We explored the effect of gravity on the perceived representation of the absolute distance of objects to the observers within the range from 1.5-6 m. Experiments were performed on board the CNES Airbus Zero-G during parabolic flights eliciting repeated exposures to short periods of microgravity (0 g), hypergravity (1.8 g), and normal gravity (1 g). Two methods for obtaining estimates of perceived egocentric distance were used: verbal reports and visually directed motion toward a memorized visual target. For the latter method, because normal walking is not possible in 0 g, blindfolded subjects translated toward the visual target by pulling on a rope with their arms. The results showed that distance estimates using both verbal reports and blind pulling were significantly different between normal gravity, microgravity, and hypergravity. Compared to the 1 g measurements, the estimates of perceived distance using blind pulling were shorter for all distances in 1.8 g, whereas in 0 g they were longer for distances up to 4 m and shorter for distances beyond. These findings suggest that gravity plays a role in both the sensorimotor system and the perceptual/cognitive system for estimating egocentric distance.
Emotional tears facilitate the recognition of sadness and the perceived need for social support.
Balsters, Martijn J H; Krahmer, Emiel J; Swerts, Marc G J; Vingerhoets, Ad J J M
2013-02-12
The tearing effect refers to the relevance of tears as an important visual cue adding meaning to human facial expression. However, little is known about how people process these visual cues and their mediating role in terms of emotion perception and person judgment. We therefore conducted two experiments in which we measured the influence of tears on the identification of sadness and the perceived need for social support at an early perceptual level. In two experiments (1 and 2), participants were exposed to sad and neutral faces. In both experiments, the face stimuli were presented for 50 milliseconds. In experiment 1, tears were digitally added to sad faces in one condition. Participants demonstrated significantly faster recognition of sad faces with tears compared to those without tears. In experiment 2, tears were added to neutral faces as well. Participants had to indicate to what extent the displayed individuals were in need of social support. Study participants reported a greater perceived need for social support to both sad and neutral faces with tears than to those without tears. This study thus demonstrated that emotional tears serve as important visual cues at an early (pre-attentive) level.
Enhancing performance expectancies through visual illusions facilitates motor learning in children.
Bahmani, Moslem; Wulf, Gabriele; Ghadiri, Farhad; Karimi, Saeed; Lewthwaite, Rebecca
2017-10-01
In a recent study by Chauvel, Wulf, and Maquestiaux (2015), golf putting performance was found to be affected by the Ebbinghaus illusion. Specifically, adult participants demonstrated more effective learning when they practiced with a hole that was surrounded by small circles, making it look larger, than when the hole was surrounded by large circles, making it look smaller. The present study examined whether this learning advantage would generalize to children who are assumed to be less sensitive to the visual illusion. Two groups of 10-year-olds practiced putting golf balls from a distance of 2 m, with perceived larger or smaller holes resulting from the visual illusion. Self-efficacy was increased in the group with the perceived larger hole. The latter group also demonstrated more accurate putting performance during practice. Importantly, learning (i.e., delayed retention performance without the illusion) was enhanced in the group that practiced with the perceived larger hole. The findings replicate previous results with adult learners and are in line with the notion that enhanced performance expectancies are key to optimal motor learning (Wulf & Lewthwaite, 2016). Copyright © 2017 Elsevier B.V. All rights reserved.
Serial dependence in the perception of attractiveness.
Xia, Ye; Leib, Allison Yamanashi; Whitney, David
2016-12-01
The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
Soundwalk approach to identify urban soundscapes individually.
Jeon, Jin Yong; Hong, Joo Young; Lee, Pyoung Jik
2013-07-01
This study proposes a soundwalk procedure for evaluating urban soundscapes. Previous studies, which adopted soundwalk methodologies for investigating participants' responses to visual and acoustic environments, were analyzed considering type, evaluation position, measurement, and subjective assessment. An individual soundwalk procedure was then developed based on asking individual subjects to walk and select evaluation positions where they perceived any positive or negative characteristics of the urban soundscape. A case study was performed in urban spaces and the results were compared with those of the group soundwalk to validate the individual soundwalk procedure. Thirty subjects (15 architects and 15 acousticians) participated in the soundwalk. During the soundwalk, the subjects selected a total of 196 positions, and those were classified into 4 groups. It was found that soundscape perceptions were dominated by acoustic comfort, visual images, and openness. It was also revealed that perceived elements of the acoustic environment and visual image differed across classified soundscape groups, and there was a difference between architects and acousticians in terms of how they described their impressions of the soundscape elements. The results show that the individual soundwalk procedure has advantages for measuring diverse subjective responses and for obtaining the perceived elements of the urban soundscape.
Ye, Ying; Griffin, Michael J
2016-04-01
This study investigated whether the reductions in finger blood flow induced by 125-Hz vibration applied to different locations on the hand depend on thresholds for perceiving vibration at these locations. Subjects attended three sessions during which vibration was applied to the right index finger, the right thenar eminence, or the left thenar eminence. Absolute thresholds for perceiving vibration at these locations were determined. Finger blood flow in the middle finger of both hands was then measured at 30-s intervals during five successive 5-min periods: (i) pre-exposure, (ii) pre-exposure with 2-N force, (iii) 2-N force with vibration, (iv) post-exposure with 2-N force, (v) recovery. During period (iii), vibration was applied at 15 dB above the absolute threshold for perceiving vibration at the right thenar eminence. Vibration at all three locations reduced finger blood flow on the exposed and unexposed hand, with greater reductions when vibrating the finger. Vibration-induced vasoconstriction was greatest for individuals with low thresholds and locations of excitation with low thresholds. Differences in vasoconstriction between subjects and between locations are consistent with the Pacinian channel mediating both absolute thresholds and vibration-induced vasoconstriction.
Surface gloss and color perception of 3D objects.
Xiao, Bei; Brainard, David H
2008-01-01
Two experiments explore the color perception of objects in complex scenes. The first experiment examines the color perception of objects across variation in surface gloss. Observers adjusted the color appearance of a matte sphere to match that of a test sphere. Across conditions we varied the body color and glossiness of the test sphere. The data indicate that observers do not simply match the average light reflected from the test. Indeed, the visual system compensates for the physical effect of varying the gloss, so that appearance is stabilized relative to what is predicted by the spatial average. The second experiment examines how people perceive color across locations on an object. We replaced the test sphere with a soccer ball that had one of its hexagonal faces colored. Observers were asked to adjust the match sphere to have the same color appearance as this test patch. The test patch could be located at either an upper or lower location on the soccer ball. In addition, we varied the surface gloss of the entire soccer ball (including the test patch). The data show that there is an effect of test patch location on observers' color matching, but this effect is small compared to the physical change in the average light reflected from the test patch across the two locations. In addition, the effect of glossy highlights on the color appearance of the test patch was consistent with the results from Experiment 1.
Alais, David; Cass, John
2010-06-23
An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes.
Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
McClain, Arianna; van den Bos, Wouter; Matheson, Donna; Desai, Manisha; McClure, Samuel M.; Robinson, Thomas N.
2013-01-01
OBJECTIVE The Delboeuf Illusion affects perceptions of the relative sizes of concentric shapes. This study was designed to extend research on the application of the Delboeuf illusion to food on a plate by testing whether a plate’s rim width and coloring influence perceptual bias to affect perceived food portion size. DESIGN AND METHODS Within-subjects experimental design. Experiment 1 tested the effect of rim width on perceived food portion size. Experiment 2 tested the effect of rim coloring on perceived food portion size. In both experiments, participants observed a series of photographic images of paired, side-by-side plates varying in designs and amounts of food. From each pair, participants were asked to select the plate that contained more food. Multi-level logistic regression examined the effects of rim width and coloring on perceived food portion size. RESULTS Experiment 1: Participants overestimated the diameter of food portions by 5% and the visual area of food portions by 10% on plates with wider rims compared to plates with very thin rims (P<0.0001). The effect of rim width was greater with larger food portion sizes. Experiment 2: Participants overestimated the diameter of food portions by 1.5% and the visual area of food portions by 3% on plates with rim coloring compared to plates with no coloring (P=0.01). The effect of rim coloring was greater with smaller food portion sizes. CONCLUSION The Delboeuf illusion applies to food on a plate. Participants overestimated food portion size on plates with wider and colored rims. These findings may help design plates to influence perceptions of food portion sizes. PMID:24005858
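The diameter-versus-area figures in the results are mutually consistent: area grows with the square of diameter, so a 5% diameter overestimate implies roughly a 10% area overestimate. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def area_overestimate(diameter_overestimate: float) -> float:
    """Convert a fractional diameter overestimate into the implied
    fractional area overestimate (area scales with diameter squared)."""
    return (1.0 + diameter_overestimate) ** 2 - 1.0

# Experiment 1: a 5% diameter overestimate implies ~10% in area.
print(round(area_overestimate(0.05), 4))   # 0.1025
# Experiment 2: a 1.5% diameter overestimate implies ~3% in area.
print(round(area_overestimate(0.015), 4))  # 0.0302
```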
Measurement of Perceived Stress in Age-Related Macular Degeneration.
Dougherty, Bradley E; Cooley, San-San L; Davidorf, Frederick H
2017-03-01
To validate the Perceived Stress Scale (PSS) in patients with age-related macular degeneration (AMD) using Rasch analysis. Study participants with AMD were recruited from the retina service of the Department of Ophthalmology at the Ohio State University during clinical visits for treatment or observation. Visual acuity with habitual distance correction was assessed. A 10-item version of the PSS was administered in large print or by reading the items to the patient. Rasch analysis was used to investigate the measurement properties of the PSS, including fit to the model, ability to separate between people with different levels of perceived stress, category response structure performance, and unidimensionality. A total of 137 patients with a diagnosis of AMD were enrolled. The mean (±SD) age of participants was 82 ± 9 years. Fifty-four percent were female. Median Early Treatment Diabetic Retinopathy Study (ETDRS) visual acuity of the better eye was 65 letters (Snellen 20/50), with a range of approximately 20/800 to 20/15. Forty-seven percent of participants were receiving an anti-VEGF injection on the day of the study visit. The response category structure was appropriate. One item, "How often have you felt confident in your ability to handle your personal problems?" was removed due to poor fit statistics. The remaining nine items showed good fit to the model, acceptable measurement precision as assessed by the Rasch person separation statistic, and unidimensionality. There was some evidence of differential item functioning by age and visual acuity. The Perceived Stress Scale demonstrated acceptable measurement properties and may be useful for the measurement of perceived stress in patients with AMD.
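For readers unfamiliar with Rasch analysis: in its dichotomous form, the model is a logistic function of the difference between a person's trait level and an item's difficulty. The PSS actually uses polytomous response categories, so this is only a simplified sketch of the underlying idea:

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Dichotomous Rasch model: probability that a person with trait
    level `theta` (in logits) endorses an item of difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When perceived stress equals item difficulty, the endorsement
# probability is exactly 0.5; it rises with theta.
print(rasch_prob(0.0, 0.0))  # 0.5
```

Fit statistics like those used to drop the misfitting item quantify how far observed responses deviate from these model-predicted probabilities.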
Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.
Kokubu, Masahiro; Ando, Soichi; Oda, Shingo
2018-01-18
The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30cm), Middle (45cm), Far (90cm), and Very Far (300cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at a far distance would contribute to faster reactions and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.
Perceptual Completion in Newborn Human Infants
ERIC Educational Resources Information Center
Valenza, Eloisa; Leo, Irene; Gava, Lucia; Simion, Francesca
2006-01-01
Despite decades of studies of human infants, a still open question concerns the role of visual experience in the development of the ability to perceive complete shapes over partial occlusion. Previous studies show that newborns fail to manifest this ability, either because they lack the visual experience required for perceptual completion or…
Creative Literacy: A New Space of Pedagogical Understanding
ERIC Educational Resources Information Center
Hrenko, Kelly A.; Stairs, Andrea J.
2012-01-01
This research has begun to examine how teachers in Maine meaningfully infuse art and Native American epistemologies through visual arts and writing across curricula to enhance student learning and engagement. Teachers explored a perceived new space of pedagogical possibility within visual arts and American Indian curricula as cross-disciplinary…
Psychophysics of the McGurk and Other Audiovisual Speech Integration Effects
ERIC Educational Resources Information Center
Jiang, Jintao; Bernstein, Lynne E.
2011-01-01
When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses, auditory correct, visual correct, fusion (the so-called "McGurk effect"), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for…
Domain-Specific Ratings of Importance and Global Self-Worth of Children with Visual Impairments
ERIC Educational Resources Information Center
Shapiro, Deborah R.; Moffett, Aaron; Lieberman, Lauren; Dummer, Gail M.
2008-01-01
This study examined perceived competence; ratings of importance of physical appearance, athletic competence, and social acceptance; discrepancy scores; and global self-worth of 43 children with visual impairments. The findings revealed that the children discounted the importance of physical appearance, athletic competence, and social acceptance…
Teacher Vision: Expert and Novice Teachers' Perception of Problematic Classroom Management Scenes
ERIC Educational Resources Information Center
Wolff, Charlotte E.; Jarodzka, Halszka; van den Bogert, Niek; Boshuizen, Henny P. A.
2016-01-01
Visual expertise has been explored in numerous professions, but research on teachers' vision remains limited. Teachers' visual expertise is an important professional skill, particularly the ability to simultaneously perceive and interpret classroom situations for effective classroom management. This skill is complex and relies on an awareness of…
Perceived School Safety: Visual Narratives from the Middle Grades
ERIC Educational Resources Information Center
Biag, Manuelito
2014-01-01
Using participatory visual research methods, this study examined how certain low-income, urban youth in a high-minority middle school characterized safe and unsafe spaces on campus. Drawing from a convenience sample of identified gifted students in one classroom (N = 20), results suggested how caring support from adults, friendly peer…
Field Dependence, Perceptual Instability, and Sex Differences.
ERIC Educational Resources Information Center
Bergum, Judith E.; Bergum, Bruce O.
Recent studies have shown perceptual instability to be related to visual creativity as reflected in career choice. In general, those who display greater perceptual instability perceive themselves to be more creative and tend to choose careers related to visual creativity, regardless of their gender. To test the hypothesis that field independents…
Dynamic Visual Perception and Reading Development in Chinese School Children
ERIC Educational Resources Information Center
Meng, Xiangzhi; Cheng-Lai, Alice; Zeng, Biao; Stein, John F.; Zhou, Xiaolin
2011-01-01
The development of reading skills may depend to a certain extent on the development of basic visual perception. The magnocellular theory of developmental dyslexia assumes that deficits in the magnocellular pathway, indicated by less sensitivity in perceiving dynamic sensory stimuli, are responsible for a proportion of reading difficulties…
Are We Better without Technology?
ERIC Educational Resources Information Center
Kara, Ahmet
2017-01-01
The purpose of this study was to determine the effect of visual element and technology supported teaching upon perceived instructor behaviors by pre-service teachers. In accordance with this purpose, whereas the lessons were lectured without benefiting from visual elements and technology in a traditional way with the students included in the…
Perceived causality, force, and resistance in the absence of launching.
Hubbard, Timothy L; Ruppel, Susan E
2017-04-01
In the launching effect, a moving object (the launcher) contacts a stationary object (the target), and upon contact, the launcher stops and the target begins moving in the same direction and at the same or slower velocity as previous launcher motion (Michotte, 1946/1963). In the study reported here, participants viewed a modified launching effect display in which the launcher stopped before or at the moment of contact and the target remained stationary. Participants rated perceived causality, perceived force, and perceived resistance of the launcher on the target or the target on the launcher. For launchers and for targets, increases in the size of the spatial gap between the final location of the launcher and the location of the target decreased ratings of perceived causality and ratings of perceived force and increased ratings of perceived resistance. Perceived causality, perceived force, and perceived resistance exhibited gradients or fields extending from the launcher and from the target and were not dependent upon contact of the launcher and target. Causal asymmetries and force asymmetries reported in previous studies did not occur, and this suggests that such asymmetries might be limited to typical launching effect stimuli. Deviations from Newton's laws of motion are noted, and the existence of separate radii of action extending from the launcher and from the target is suggested.
Grubert, Anna; Eimer, Martin
2015-11-11
During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.
Van Berkel, Derek B.; Tabrizian, Payam; Dorning, Monica; Smart, Lindsey S.; Newcomb, Doug; Mehaffey, Megan; Neale, Anne; Meentemeyer, Ross K.
2018-01-01
Landscapes are increasingly recognized for providing valuable cultural ecosystem services with numerous non-material benefits by serving as places of rest, relaxation, and inspiration that ultimately improve overall mental health and physical well-being. Maintaining and enhancing these valuable benefits through targeted management and conservation measures requires understanding the spatial and temporal determinants of perceived landscape values. Content contributed through mobile technologies and the web are emerging globally, providing a promising data source for localizing and assessing these landscape benefits. These georeferenced data offer rich in situ qualitative information through photos and comments that capture valued and special locations across large geographic areas. We present a novel method for mapping and modeling landscape values and perceptions that leverages viewshed analysis of georeferenced social media data. Using a high resolution LiDAR (Light Detection and Ranging) derived digital surface model, we are able to evaluate landscape characteristics associated with the visual-sensory qualities of outdoor recreationalists. Our results show the importance of historical monuments and attractions in addition to specific environmental features which are appreciated by the public. Evaluation of photo-image content highlights the opportunity of including temporally and spatially variable visual-sensory qualities in cultural ecosystem services (CES) evaluation like the sights, sounds and smells of wildlife and weather phenomena.
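The viewshed idea behind the method, deciding which landscape cells are visible from a photo location given a surface model, can be sketched with a one-dimensional line-of-sight test (a toy elevation profile, not the LiDAR workflow the study used):

```python
def visible(dsm, observer, target, eye_height=1.7):
    """Simple line-of-sight test on a 1-D elevation profile `dsm`
    (metres): the target cell is visible from the observer cell if no
    intermediate cell rises above the straight sight line."""
    z0 = dsm[observer] + eye_height  # eye level above the ground
    z1 = dsm[target]
    n = target - observer
    for i in range(1, n):
        # Elevation of the sight line at intermediate cell observer + i.
        line_z = z0 + (z1 - z0) * i / n
        if dsm[observer + i] > line_z:
            return False
    return True

profile = [2.0, 3.0, 10.0, 3.0, 2.0]  # a ridge at index 2
print(visible(profile, 0, 4))  # False (the ridge blocks the view)
print(visible(profile, 0, 1))  # True
```

A full 2-D viewshed repeats this test along rays from the observer to every cell of the surface model.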
ERIC Educational Resources Information Center
Eddles-Hirsch, Katrina
2017-01-01
This article reports on an exploratory study that addressed the low confidence levels of 80 generalist primary student teachers enrolled in a mandatory visual arts course. Previous studies in this area have found that a cycle of neglect exists in Australia, as a result of educators' lack of confidence in their ability to teach visual arts. This is…
Similarity relations in visual search predict rapid visual categorization
Mohan, Krithika; Arun, S. P.
2012-01-01
How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
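One way to picture the proposed relation between similarity and categorization time, purely as an illustration (the evidence measure and all numbers below are hypothetical, not the authors' fitted model):

```python
def category_evidence(sim_within, sim_outside):
    """Hypothetical evidence score: mean perceived similarity to
    category members relative to the total similarity to members and
    non-members. Higher evidence should predict faster categorization."""
    mw = sum(sim_within) / len(sim_within)
    mo = sum(sim_outside) / len(sim_outside)
    return mw / (mw + mo)

# A typical member (high within-category similarity) yields stronger
# evidence than an atypical one, mirroring the finding that atypical
# objects take longer to categorize.
typical = category_evidence([0.9, 0.8, 0.85], [0.2, 0.3])
atypical = category_evidence([0.5, 0.4, 0.45], [0.35, 0.3])
print(typical > atypical)  # True
```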
Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert
2010-01-01
Background The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. Methodology/Principal Findings The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. Conclusions/Significance The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. 
However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing. PMID:20967247
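The physical side of the claim is easy to check: under an approximately constant Magnus acceleration, lateral deflection grows as a smooth quadratic in time, so the "break" that batters report has no counterpart in the trajectory itself. A toy sketch (the 9 m/s² sideways acceleration is an assumed figure for illustration):

```python
def lateral_deflection(t: float, a_magnus: float = 9.0) -> float:
    """Lateral deflection (m) of a spinning ball after t seconds under
    a constant Magnus acceleration: a smooth parabola x(t) = a*t^2/2."""
    return 0.5 * a_magnus * t * t

# Sampling the flight shows a smooth, continuous path; the perceived
# discontinuous shift arises in the observer, not in the physics.
path = [round(lateral_deflection(t / 10), 3) for t in range(5)]
print(path)  # [0.0, 0.045, 0.18, 0.405, 0.72]
```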
The impact of interference on short-term memory for visual orientation.
Rademaker, Rosanne L; Bloem, Ilona M; De Weerd, Peter; Sack, Alexander T
2015-12-01
Visual short-term memory serves as an efficient buffer for maintaining no longer directly accessible information. How robust are visual memories against interference? Memory for simple visual features has proven vulnerable to distractors containing conflicting information along the relevant stimulus dimension, leading to the idea that interacting feature-specific channels at an early stage of visual processing support memory for simple visual features. Here we showed that memory for a single randomly orientated grating was susceptible to interference from a to-be-ignored distractor grating presented midway through a 3-s delay period. Memory for the initially presented orientation became noisier when it differed from the distractor orientation, and response distributions were shifted toward the distractor orientation (by ∼3°). Interestingly, when the distractor was rendered task-relevant by making it a second memory target, memory for both retained orientations showed reduced reliability as a function of increased orientation differences between them. However, the degree to which responses to the first grating shifted toward the orientation of the task-relevant second grating was much reduced. Finally, using a dichoptic display, we demonstrated that these systematic biases caused by a consciously perceived distractor disappeared once the distractor was presented outside of participants' awareness. Together, our results show that visual short-term memory for orientation can be systematically biased by interfering information that is consciously perceived.
Greguol, Márcia; Gobbi, Erica; Carraro, Attilio
2015-01-01
To analyze the practice of physical activity among children and adolescents with visual impairments (VI), regarding the possible influence of parental support and perceived barriers. Twenty-two young people with VIs (10 ± 2.74 years old) and one of each of their parents were evaluated. They responded to the Physical Activity Questionnaire for Older Children (PAQ-C), Baecke Questionnaire, the Parental Support Scale and a questionnaire about perceived barriers to physical activity. The independent samples t-test, Pearson correlation test and chi-square test were performed. Blind young people showed lower physical activity levels. There were significant correlations both between parents' physical activity and the support offered to children and between the PAQ-C results and the importance given by young people to physical activity, but only for those aged between 8 and 10 years old. The main perceived barriers were lack of security, motivation, professional training and information about available physical activity programs. The influence of parental support seems to be an important factor in the adoption of a physically active lifestyle for young people with VI. Parents and children should have more information about the benefits and opportunities of physical activity. Implications for Rehabilitation Young people with visual impairment should be encouraged by parents to practice physical activity. More information should be provided on the benefits of physical activity to both parents and children. Professional training should be available to help support this group become more active.
There's no team in I: How observers perceive individual creativity in a team setting.
Kay, Min B; Proudfoot, Devon; Larrick, Richard P
2018-04-01
Creativity is highly valued in organizations as an important source of innovation. As most creative projects require the efforts of groups of individuals working together, it is important to understand how creativity is perceived for team products, including how observers attribute creative ability to focal actors who worked as part of a creative team. Evidence from three experiments suggests that observers commit the fundamental attribution error, systematically discounting the contribution of the group when assessing the creative ability of a single group representative, particularly when the group itself is not visually salient. In a pilot study, we found that, in the context of the design team at Apple, a target group member visually depicted alone is perceived to have greater personal creative ability than when he is visually depicted with his team. In Study 1, using a sample of managers, we conceptually replicated this finding and further observed that, when shown alone, a target member of a group that produced a creative product is perceived to be as creative as an individual described as working alone on the same output. In Study 2, we replicated the findings of Study 1 and also observed that a target group member depicted alone, rather than with his team, is also attributed less creative ability for uncreative group output. Findings are discussed in light of how overattribution of individual creative ability can harm organizations in the long run. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Nunes, Guilherme S; Bender, Paula Urio; de Menezes, Fábio Sprada; Yamashitafuji, Igor; Vargas, Valentine Zimermann; Wageck, Bruna
2016-04-01
Can massage therapy reduce pain and perceived fatigue in the quadriceps of athletes after a long-distance triathlon race (Ironman)? Randomised, controlled trial with concealed allocation, intention-to-treat analysis and blinded outcome assessors. Seventy-four triathlon athletes who completed an entire Ironman triathlon race and whose main complaint was pain in the anterior portion of the thigh. The experimental group received massage to the quadriceps, which was aimed at recovery after competition, and the control group rested in sitting. The outcomes were pain and perceived fatigue, which were reported using a visual analogue scale, and pressure pain threshold at three points over the quadriceps muscle, which was assessed using digital pressure algometry. The experimental group had significantly lower scores than the control group on the visual analogue scale for pain (MD -7 mm, 95% CI -13 to -1) and for perceived fatigue (MD -15 mm, 95% CI -21 to -9). There were no significant between-group differences for the pressure pain threshold at any of the assessment points. Massage therapy was more effective than no intervention on the post-race recovery from pain and perceived fatigue in long-distance triathlon athletes. Brazilian Registry of Clinical Trials, RBR-4n2sxr. Copyright © 2016 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.
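The reported effects are mean differences with 95% confidence intervals; a simplified sketch of that computation using a normal approximation (the VAS scores below are hypothetical, and the trial's actual analysis may have differed in detail):

```python
import math

def mean_diff_ci(x, y, z=1.96):
    """Mean difference (x minus y) between two independent samples,
    with a normal-approximation 95% confidence interval."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se = math.sqrt(vx / nx + vy / ny)  # standard error of the difference
    md = mx - my
    return md, md - z * se, md + z * se

# Hypothetical VAS pain scores (mm): massage group vs control group.
md, low, high = mean_diff_ci([20, 25, 18, 22], [28, 30, 27, 33])
print(round(md, 2))  # -8.25
```

A negative difference with an interval excluding zero, as in the trial's pain and fatigue outcomes, indicates lower scores in the massage group.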
Vision in Flies: Measuring the Attention Span
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA) which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space. It can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight a wild type fly keeps its FoA at a certain location for up to 4s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1s. PMID:26848852
Hannon, Erin E; Schachner, Adena; Nave-Blodgett, Jessica E
2017-07-01
Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory information and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching stimuli from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of infancy. Copyright © 2017 Elsevier Inc. All rights reserved.
Serial dependence promotes object stability during occlusion
Liberman, Alina; Zhang, Kathy; Whitney, David
2016-01-01
Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066
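The reported pull toward the pre-occlusion orientation can be illustrated with a simple mixing model. This is a hypothetical sketch, not the paper's fitted model: the linear form and the `pull` fraction are assumptions, and orientation is treated as circular over 180°.

```python
def serially_dependent_percept(current_deg, previous_deg, pull=0.2):
    """Percept of the current orientation, pulled toward the previous one.

    Hypothetical linear model of serial dependence: orientations live on a
    180-degree circle, and the percept shifts toward the previous stimulus
    by a fixed fraction `pull` of the signed circular difference.
    """
    # Signed circular difference, mapped into [-90, 90)
    diff = ((previous_deg - current_deg + 90) % 180) - 90
    return (current_deg + pull * diff) % 180

# A Gabor at 10 deg emerging from occlusion, seen entering at 30 deg:
# the reported orientation is biased toward 30 deg.
percept = serially_dependent_percept(10, 30, pull=0.2)
```

Per the abstract, a predictable, continuously moving trajectory would correspond to a larger `pull` than an unpredictable one or a static displacement.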
De Freitas, Julian; Alvarez, George A
2018-05-28
To what extent are people's moral judgments susceptible to subtle factors of which they are unaware? Here we show that we can change people's moral judgments outside of their awareness by subtly biasing perceived causality. Specifically, we used subtle visual manipulations to create visual illusions of causality in morally relevant scenarios, and this systematically changed people's moral judgments. After demonstrating the basic effect using simple displays involving an ambiguous car collision that ends up injuring a person (E1), we show that the effect is sensitive on the millisecond timescale to manipulations of task-irrelevant factors that are known to affect perceived causality, including the duration (E2a) and asynchrony (E2b) of specific task-irrelevant contextual factors in the display. We then conceptually replicate the effect using a different paradigm (E3a), and also show that we can eliminate the effect by interfering with motion processing (E3b). Finally, we show that the effect generalizes across different kinds of moral judgments (E3c). Combined, these studies show that obligatory, abstract inferences made by the visual system influence moral judgments. Copyright © 2018 Elsevier B.V. All rights reserved.
Perception of self-tilt in a true and illusory vertical plane
NASA Technical Reports Server (NTRS)
Groen, Eric L.; Jenkin, Heather L.; Howard, Ian P.; Oman, C. M. (Principal Investigator)
2002-01-01
A tilted furnished room can induce strong visual reorientation illusions in stationary subjects. Supine subjects may perceive themselves upright when the room is tilted 90 degrees so that the visual polarity axis is kept aligned with the subject. This 'upright illusion' was used to induce roll tilt in a truly horizontal, but perceptually vertical, plane. A semistatic tilt profile was applied, in which the tilt angle gradually changed from 0 degrees to 90 degrees, and vice versa. This method produced larger illusory self-tilt than usually found with static tilt of a visual scene. Ten subjects indicated self-tilt by setting a tactile rod to perceived vertical. Six of them experienced the upright illusion and indicated illusory self-tilt with an average gain of about 0.5. This value is smaller than with true self-tilt (0.8), but comparable to the gain of visually induced self-tilt in erect subjects. Apparently, the contribution of nonvisual cues to gravity was independent of the subject's orientation to gravity itself. It therefore seems that the gain of visually induced self-tilt is smaller because nonvisual cues are lacking rather than conflicting. A vector analysis is used to discuss the results in terms of relative sensory weightings.
Visual learning with reduced adaptation is eccentricity-specific.
Harris, Hila; Sagi, Dov
2018-01-12
Visual learning is known to be specific to the trained target location, showing little transfer to untrained locations. Recently, learning was shown to transfer across equal-eccentricity retinal locations when sensory adaptation due to repetitive stimulation was minimized. It was suggested that learning transfers to previously untrained locations when the learned representation is location invariant, with sensory adaptation introducing location-dependent representations, thus preventing transfer. Spatial invariance may also fail when the trained and tested locations are at different distances from the center of gaze (different retinal eccentricities), due to differences in the corresponding low-level cortical representations (e.g. the allocated cortical area decreases with eccentricity). Thus, if learning improves performance by better classifying target-dependent early visual representations, generalization is predicted to fail when locations of different retinal eccentricities are trained and tested in the absence of sensory adaptation. Here, using the texture discrimination task, we show specificity of learning across different retinal eccentricities (4-8°) using reduced-adaptation training. The existence of generalization across equal-eccentricity locations but not across different eccentricities demonstrates that learning accesses visual representations preceding location-independent representations, with specificity of learning explained by inhomogeneous sensory representation.
Integrated Cuing Requirements (ICR) Study: Demonstration Data Base and Users Guide.
1983-07-01
[Fragmentary excerpt; only partially recoverable text survives.] A scene viewed with a servo-mounted television camera is used to provide a visual display for an observer in an ATD. The remaining fragments comprise a partial definition of modulation, a cross-reference noting that the impact of stationary scene details was also tested in this study (see Figure 33.5-1), a pointer to the discussion of the impact of perceived distance on perceived size (Section 31._.), and the caption of Figure 33.4-1, "Perceived Distance and Velocity of Self".
Effects of selective attention on perceptual filling-in.
De Weerd, P; Smith, E; Greenberg, P
2006-03-01
After a few seconds, a figure steadily presented in peripheral vision becomes perceptually filled in by its background, as if it "disappeared". We report that directing attention to the color, shape, or location of a figure increased the probability of perceiving filling-in compared to unattended figures, without modifying the time required for filling-in. This effect could be augmented by boosting attention. Furthermore, the frequency distribution of filling-in response times for attended figures could be predicted by multiplying the frequencies of response times for unattended figures by a constant. We propose that, after failure of figure-ground segregation, the neural interpolation processes that produce perceptual filling-in are enhanced in attended figure regions. As filling-in processes are involved in surface perception, the present study demonstrates that even very early visual processes are subject to modulation by cognitive factors.
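The multiplicative relation reported here (attended response-time frequencies equal unattended frequencies times a constant) has a testable signature: the total number of filling-in reports rises, while the shape of the distribution, and hence the mean latency, is unchanged. A minimal sketch with hypothetical counts:

```python
def scale_distribution(freqs, k):
    """Multiply each frequency bin by a constant gain k (the abstract's model)."""
    return [f * k for f in freqs]

def mean_rt(freqs, bin_centers):
    """Mean response time implied by a frequency histogram."""
    return sum(f * c for f, c in zip(freqs, bin_centers)) / sum(freqs)

# Hypothetical response-time histogram for unattended figures (counts per bin).
bins = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # bin centers in seconds (hypothetical)
unattended = [2, 5, 9, 7, 4, 1]
attended = scale_distribution(unattended, 1.5)

# More filling-in reports overall...
assert sum(attended) > sum(unattended)
# ...but the normalized shape, and therefore the mean latency, is identical.
assert abs(mean_rt(attended, bins) - mean_rt(unattended, bins)) < 1e-9
```

The invariance of the mean under a constant scaling is what distinguishes this account from one in which attention speeds filling-in up.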
How Visuo-Spatial Mental Imagery Develops: Image Generation and Maintenance
Wimmer, Marina C.; Maras, Katie L.; Robinson, Elizabeth J.; Doherty, Martin J.; Pugeault, Nicolas
2015-01-01
Two experiments examined the nature of visuo-spatial mental imagery generation and maintenance in 4-, 6-, 8-, 10-year old children and adults (N = 211). The key questions were how image generation and maintenance develop (Experiment 1) and how accurately children and adults coordinate mental and visually perceived images (Experiment 2). Experiment 1 indicated that basic image generation and maintenance abilities are present at 4 years of age but the precision with which images are generated and maintained improves particularly between 4 and 8 years. In addition to increased precision, Experiment 2 demonstrated that generated and maintained mental images become increasingly similar to visually perceived objects. Altogether, findings suggest that for simple tasks demanding image generation and maintenance, children attain adult-like precision younger than previously reported. This research also sheds new light on the ability to coordinate mental images with visual images in children and adults. PMID:26562296
How do plants see the world? - UV imaging with a TiO2 nanowire array by artificial photosynthesis.
Kang, Ji-Hoon; Leportier, Thibault; Park, Min-Chul; Han, Sung Gyu; Song, Jin-Dong; Ju, Hyunsu; Hwang, Yun Jeong; Ju, Byeong-Kwon; Poon, Ting-Chung
2018-05-10
The concept of plant vision refers to the fact that plants are receptive to their visual environment, although the mechanism involved is quite distinct from the human visual system. The mechanism in plants is not well understood and has yet to be fully investigated. In this work, we have exploited the properties of TiO2 nanowires as a UV sensor to simulate the phenomenon of photosynthesis in order to come one step closer to understanding how plants see the world. To the best of our knowledge, this study is the first approach to emulate and depict plant vision. We have emulated the visual map perceived by plants with a single-pixel imaging system combined with a mechanical scanner. The image acquisition has been demonstrated for several electrolyte environments, in both transmissive and reflective configurations, in order to explore the different conditions in which plants perceive light.
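The single-pixel acquisition described here amounts to a raster scan: the mechanical scanner positions the sensor, one reading is taken per position, and the readings are assembled into an image. A schematic sketch; the sensor model and numbers are hypothetical, not the measured TiO2 device characteristics:

```python
def raster_scan(scene, read_sensor):
    """Assemble a 2-D image from single-pixel readings at each scan position,
    as a mechanical scanner sweeping a sensor across the scene would."""
    rows, cols = len(scene), len(scene[0])
    return [[read_sensor(scene, r, c) for c in range(cols)] for r in range(rows)]

def uv_photocurrent(scene, r, c, responsivity=0.5):
    """Hypothetical sensor model: photocurrent proportional to local UV intensity."""
    return responsivity * scene[r][c]

uv_scene = [[2, 4], [6, 8]]            # hypothetical UV intensity map
image = raster_scan(uv_scene, uv_photocurrent)
```

The same loop structure covers both the transmissive and reflective configurations; only the sensor model changes.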
Employees' satisfaction as influenced by acoustic and visual privacy in the open office environment
NASA Astrophysics Data System (ADS)
Soules, Maureen Jeanette
The purpose of this study was to examine the relationship between employees' acoustic and visual privacy issues and their perceived satisfaction in their open office work environments while in focus work mode. The study examined the Science Teaching Student Services Building at the University of Minnesota Minneapolis. The building houses instructional classrooms and administrative offices that service UMN students. The Sustainable Post-Occupancy Evaluation Survey was used to collect data on overall privacy conditions, acoustic and visual privacy conditions, and employees' perceived privacy conditions while in their primary workplace. Paired T-tests were used to analyze the relationships between privacy conditions and employees' perceptions of privacy. All hypotheses are supported indicating that the privacy variables are correlated to the employees' perception of satisfaction within the primary workplace. The findings are important because they can be used to inform business leaders, designers, educators and future research in the field of office design.
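The paired t-test used here compares two measures taken from the same respondent. A minimal stdlib sketch with hypothetical ratings (the actual survey items and scales belong to the Sustainable Post-Occupancy Evaluation Survey and are not shown):

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched samples x and y (pure stdlib sketch)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical per-employee ratings: acoustic privacy condition vs. satisfaction.
acoustic = [3, 2, 4, 3, 2, 3, 4, 2]
satisfaction = [2, 2, 3, 2, 1, 3, 3, 2]
t = paired_t(acoustic, satisfaction)
```

The statistic would then be compared against a t distribution with n - 1 degrees of freedom to obtain a p-value.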
The role of primary auditory and visual cortices in temporal processing: A tDCS approach.
Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F
2016-10-15
Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodal stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
Mixing apples with oranges: Visual attention deficits in schizophrenia.
Caprile, Claudia; Cuevas-Esteban, Jorge; Ochoa, Susana; Usall, Judith; Navarra, Jordi
2015-09-01
Patients with schizophrenia usually present cognitive deficits. We investigated possible anomalies at filtering out irrelevant visual information in this psychiatric disorder. Associations between these anomalies and positive and/or negative symptomatology were also addressed. A group of individuals with schizophrenia and a control group of healthy adults performed a Garner task. In Experiment 1, participants had to rapidly classify visual stimuli according to their colour while ignoring their shape. These two perceptual dimensions are reported to be "separable" by visual selective attention. In Experiment 2, participants classified the width of other visual stimuli while trying to ignore their height. These two visual dimensions are considered as being "integral" and cannot be attended separately. While healthy perceivers were, in Experiment 1, able to exclusively respond to colour, an irrelevant variation in shape increased colour-based reaction times (RTs) in the group of patients. In Experiment 2, RTs when classifying width increased in both groups as a consequence of perceiving a variation in the irrelevant dimension (height). However, this interfering effect was larger in the group of patients with schizophrenia than in the control group. Further analyses revealed that these alterations in filtering out irrelevant visual information correlated with positive symptoms on the PANSS scale. A possible limitation of the study is the relatively small sample. Our findings suggest the presence of attention deficits in filtering out irrelevant visual information in schizophrenia that could be related to positive symptomatology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Pilot perception and confidence of location during a simulated helicopter navigation task.
Yang, Ji Hyun; Cowden, Bradley T; Kennedy, Quinn; Schramm, Harrison; Sullivan, Joseph
2013-09-01
This paper aims to provide insights into human perception, navigation performance, and confidence in helicopter overland navigation. Helicopter overland navigation is a challenging mission area because it is a complex cognitive task, and failing to recognize when the aircraft is off-course can lead to operational failures and mishaps. A human-in-the-loop experiment to investigate pilot perception during simulated overland navigation by analyzing actual navigation trajectory, pilots' perceived location, and corresponding confidence levels was designed. There were 15 military officers with prior overland navigation experience who completed 4 simulated low-level navigation routes, 2 of which entailed auto-navigation. Each route was paused roughly every 30 s for the subject to mark their perceived location on the map and their confidence level using a customized program. Analysis shows that there is no correlation between perceived and actual location of the aircraft, nor between confidence level and actual location. There is, however, some evidence that there is a correlation (rho = -0.60 to approximately 0.65) between perceived location and intended route of flight, suggesting that there is a bias toward believing one is on the intended flight route. If aviation personnel can proactively identify the circumstances in which usual misperceptions occur in navigation, they may reduce mission failure and accident rate. Fleet squadrons and instructional commands can benefit from this study to improve operations that require low-level flight while also improving crew resource management.
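The reported rho values are Spearman rank correlations. A self-contained sketch of the computation, applied to hypothetical checkpoint data (the study's actual trajectory and map-marking data are not reproduced here):

```python
def average_ranks(values):
    """1-based ranks, with ties assigned the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation computed on the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-checkpoint data: distance along the intended route vs. where
# the pilot marked the aircraft on the map (arbitrary units).
intended = [1, 2, 3, 4, 5]
marked = [1, 2, 2, 4, 5]
rho = spearman_rho(intended, marked)
```

Rank correlation is a reasonable choice here because map-marking errors need not scale linearly with distance flown.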
Shifting Attention within Memory Representations Involves Early Visual Areas
Munneke, Jaap; Belopolsky, Artem V.; Theeuwes, Jan
2012-01-01
Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within a working memory representation. In the current fMRI study participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD-activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings. PMID:22558165
Forest aesthetics, biodiversity, and the perceived appropriateness of ecosystem management practices
Paul H. Gobster
1996-01-01
The social acceptability of 'ecosystem management' and related new forestry programs hinges on how people view the forest environment and what it means to them. For many, these conceptions are based on a 'scenic aesthetic' that is dramatic and visual, where both human and natural changes are perceived negatively. In contrast, appreciation of...
Fragile visual short-term memory is an object-based and location-specific store.
Pinto, Yaïr; Sligte, Ilja G; Shapiro, Kimron L; Lamme, Victor A F
2013-08-01
Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to unveil the functional underpinnings of this memory storage. We found that FM is only completely erased when the new visual scene appears at the same location and consists of the same objects as the to-be-recalled information. This result has two important implications: First, it shows that FM is an object- and location-specific store, and second, it suggests that FM might be used in everyday life when the presentation of visual information is appropriately designed.
Glasauer, S; Dieterich, M; Brandt, T
2018-05-29
Acute unilateral lesions of vestibular graviceptive pathways from the otolith organs and semicircular canals via vestibular nuclei and the thalamus to the parieto-insular vestibular cortex regularly cause deviations of perceived verticality in the frontal roll plane. These tilts are ipsilateral in peripheral and in ponto-medullary lesions and contralateral in ponto-mesencephalic lesions. Unilateral lesions of the vestibular thalamus or cortex cause smaller tilts of the perceived vertical, which may be either ipsilateral or contralateral. Using a neural network model, we previously explained why unilateral vestibular midbrain lesions rarely manifest with rotational vertigo. We here extend this approach, focussing on the direction-specific deviations of perceived verticality in the roll plane caused by acute unilateral vestibular lesions from the labyrinth to the cortex. Traditionally, the effect of unilateral peripheral lesions on perceived verticality has been attributed to a lesion-based bias of the otolith system. We here suggest, on the basis of a comparison of model simulations with patient data, that perceived visual tilt after peripheral lesions is caused by the effect of a torsional semicircular canal bias on the central gravity estimator. We further argue that the change of gravity coding from a peripheral/brainstem vectorial representation in otolith coordinates to a distributed population coding at thalamic and cortical levels can explain why unilateral thalamic and cortical lesions have a variable effect on perceived verticality. Finally, we propose how the population-coding network for gravity direction might implement the elements required for the well-known perceptual underestimation of the subjective visual vertical in tilted body positions.
Chouinard, Philippe A.; Peel, Hayden J.; Landry, Oriane
2017-01-01
The closer a line extends toward a surrounding frame, the longer it appears. This is known as a framing effect. Over 70 years ago, Teodor Künnapas demonstrated that the shape of the visual field itself can act as a frame to influence the perceived length of lines in the vertical-horizontal illusion. This illusion is typically created by having a vertical line rise from the center of a horizontal line of the same length creating an inverted T figure. We aimed to determine if the degree to which one fixates on a spatial location where the two lines bisect could influence the strength of the illusion, assuming that the framing effect would be stronger when the retinal image is more stable. We performed two experiments: the visual-field and vertical-horizontal illusion experiments. The visual-field experiment demonstrated that the participants could discriminate a target more easily when it was presented along the horizontal vs. vertical meridian, confirming a framing influence on visual perception. The vertical-horizontal illusion experiment determined the effects of orientation, size and eye gaze on the strength of the illusion. As predicted, the illusion was strongest when the stimulus was presented in either its standard inverted T orientation or when it was rotated 180° compared to other orientations, and in conditions in which the retinal image was more stable, as indexed by eye tracking. Taken together, we conclude that the results provide support for Teodor Künnapas’ explanation of the vertical-horizontal illusion. PMID:28392764
The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices
An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei
2014-01-01
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using the vector-maximum but not the vector-summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
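The vector-summation and vector-maximum readouts mentioned at the end differ in how they recover a direction from population responses. The sketch below uses four hypothetical direction-preferring groups to show why summation fails above the transition speed: groups preferring opposite directions along the motion axis become co-active, so their vectors nearly cancel, while the maximum still picks out a direction on that axis.

```python
import math

def vector_sum(prefs_deg, responses):
    """Population-vector readout: response-weighted sum of unit vectors at each
    group's preferred direction; returns (direction_deg, magnitude)."""
    x = sum(r * math.cos(math.radians(p)) for p, r in zip(prefs_deg, responses))
    y = sum(r * math.sin(math.radians(p)) for p, r in zip(prefs_deg, responses))
    return math.degrees(math.atan2(y, x)) % 360, math.hypot(x, y)

def vector_max(prefs_deg, responses):
    """Winner-take-all readout: preferred direction of the strongest group."""
    return prefs_deg[max(range(len(responses)), key=lambda i: responses[i])]

prefs = [0, 90, 180, 270]     # hypothetical preferred directions (degrees)

# Below the transition speed: one group dominates; both readouts agree.
slow = [10, 2, 1, 2]

# Above it: groups preferring opposite directions along the motion axis are
# co-active; the summed vector's magnitude collapses, but the maximum still
# recovers a direction on the motion axis.
fast = [2, 10, 2, 9]
```

With `slow`, both readouts report 0°; with `fast`, `vector_max` reports 90° while the summed magnitude shrinks to a small fraction of the peak response, making the summation readout unreliable.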
Correction techniques for depth errors with stereo three-dimensional graphic displays
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Holden, Anthony; Williams, Steven P.
1992-01-01
Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques; the first technique corrects the original visual scene to the DVV mapping based on human perception errors, and the second (which is based on head-positioning sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.
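The first correction (compensating for perceptual depth errors in the scene-to-DVV mapping) can be sketched as inverting an empirically fitted perception function. The linear model and calibration numbers below are hypothetical stand-ins; the report's actual correction functions and its head-tracking correction are more involved.

```python
def fit_perception(placed, perceived):
    """Least-squares fit of perceived ~= gain * placed + offset from
    calibration data (hypothetical linear perception model)."""
    n = len(placed)
    mx = sum(placed) / n
    my = sum(perceived) / n
    gain = (sum((p - mx) * (q - my) for p, q in zip(placed, perceived))
            / sum((p - mx) ** 2 for p in placed))
    return gain, my - gain * mx

def corrected_placement(intended_depth, gain, offset):
    """Place geometry where the *perceived* depth equals the intended depth."""
    return (intended_depth - offset) / gain

# Hypothetical calibration run: subjects perceive stereo depth compressed.
placed = [1.0, 2.0, 3.0, 4.0]
perceived = [1.8, 2.6, 3.4, 4.2]      # = 0.8 * placed + 1.0
gain, offset = fit_perception(placed, perceived)
```

Placing a symbol at `corrected_placement(d, gain, offset)` instead of at `d` makes the perceived depth, under this model, come out at the intended value.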
Stimulus Load and Oscillatory Activity in Higher Cortex
Kornblith, Simon; Buschman, Timothy J.; Miller, Earl K.
2016-01-01
Exploring and exploiting a rich visual environment requires perceiving, attending, and remembering multiple objects simultaneously. Recent studies have suggested that this mental “juggling” of multiple objects may depend on oscillatory neural dynamics. We recorded local field potentials from the lateral intraparietal area, frontal eye fields, and lateral prefrontal cortex while monkeys maintained variable numbers of visual stimuli in working memory. Behavior suggested independent processing of stimuli in each hemifield. During stimulus presentation, higher-frequency power (50–100 Hz) increased with the number of stimuli (load) in the contralateral hemifield, whereas lower-frequency power (8–50 Hz) decreased with the total number of stimuli in both hemifields. During the memory delay, lower-frequency power increased with contralateral load. Load effects on higher frequencies during stimulus encoding and lower frequencies during the memory delay were stronger when neural activity also signaled the location of the stimuli. Like power, higher-frequency synchrony increased with load, but beta synchrony (16–30 Hz) showed the opposite effect, increasing when power decreased (stimulus presentation) and decreasing when power increased (memory delay). Our results suggest roles for lower-frequency oscillations in top-down processing and higher-frequency oscillations in bottom-up processing. PMID:26286916
Azorin-Lopez, Jorge; Fuster-Guillo, Andres; Saval-Calvo, Marcelo; Mora-Mora, Higinio; Garcia-Chamizo, Juan Manuel
2017-01-01
Visual information is a well-known input available from many kinds of sensors. However, most perception problems are modeled and tackled individually. It is necessary to provide a general imaging model that allows us to parametrize different input systems as well as their problems and possible solutions. In this paper, we present an active vision model that considers the imaging system as a whole (including the camera, the lighting system, and the object to be perceived) in order to propose solutions for automated visual systems that exhibit perception problems. As a concrete case study, we instantiate the model in a real and still challenging application: automated visual inspection. It is one of the most widely used quality control systems for detecting defects on manufactured objects, but it presents problems for specular products. We model these perception problems taking into account environmental conditions and camera parameters that allow a system to properly perceive the specific object characteristics needed to determine defects on surfaces. The validation of the model has been carried out using simulations, which provide an efficient way to perform a large set of tests (different environmental conditions and camera parameters) as a step prior to experimentation in real manufacturing environments, which is more complex in terms of instrumentation and more expensive. Results prove the success of the model in adjusting scale, viewpoint and lighting conditions to detect structural and color defects on specular surfaces. PMID:28640211
Bourrelly, Aurore; McIntyre, Joseph; Luyat, Marion
2015-09-01
On Earth, visual eye height (VEH)--the distance from the observer's line of gaze to the ground in the visual scene--constitutes an effective cue in perceiving affordance such as the passability through apertures, based on the assumption that one's feet are on the ground. In the present study, we questioned whether an observer continues to use VEH to estimate the width of apertures during long-term exposure to weightlessness, where contact with the floor is not required. Ten astronauts were tested in preflight, inflight in the International Space Station, and postflight sessions. They were asked to adjust the opening of a virtual doorway displayed on a laptop device until it was perceived to be just wide enough to pass through (i.e., the critical aperture). We manipulated VEH by raising and lowering the level of the floor in the visual scene. We observed an effect of VEH manipulation on the critical aperture. When VEH decreased, the critical aperture decreased too, suggesting that widths relative to the body were perceived to be larger when VEH was smaller. There was no overall significant session effect, but the analysis of between-subjects variability revealed two participant profile groups. The effect of weightlessness was different for these two groups even though the VEH strategy remained operational during spaceflight. This study shows that the VEH strategy appears to be very robust and can be used, if necessary, in inappropriate circumstances such as free-floating, perhaps promoted by the nature of the visual scene.
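The VEH cue can be sketched as eye-height scaling: a width is judged passable at a fixed threshold expressed in VEH units, so the just-passable physical aperture shrinks as VEH shrinks. The threshold value and numbers below are hypothetical illustrations, not the study's fitted parameters.

```python
def eye_height_units(aperture_width, visual_eye_height):
    """Aperture width expressed in visual-eye-height (VEH) units."""
    return aperture_width / visual_eye_height

def critical_aperture(threshold_eh, visual_eye_height):
    """Physical aperture judged 'just passable' if the passability decision is
    made at a fixed threshold in VEH units (threshold here is hypothetical)."""
    return threshold_eh * visual_eye_height

# Raising the floor in the visual scene lowers VEH; lowering it raises VEH.
high_veh = critical_aperture(0.45, 1.6)   # meters, hypothetical numbers
low_veh = critical_aperture(0.45, 1.2)
```

Consistent with the abstract, a lower VEH yields a smaller critical aperture: the same physical width spans more eye-height units and is therefore perceived as larger relative to the body.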
Orientation of selective effects of body tilt on visually induced perception of self-motion.
Nakamura, S; Shimojo, S
1998-10-01
We examined the effect of body posture upon visually induced perception of self-motion (vection) with various angles of observer's tilt. The experiment indicated that the tilted body of observer could enhance perceived strength of vertical vection, while there was no effect of body tilt on horizontal vection. This result suggests that there is an interaction between the effects of visual and vestibular information on perception of self-motion.
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. 
SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
Dynamic visual noise affects visual short-term memory for surface color, but not spatial location.
Dent, Kevin
2010-01-01
In two experiments participants retained a single color or a set of four spatial locations in memory. During a 5 s retention interval participants viewed either flickering dynamic visual noise or a static matrix pattern. In Experiment 1 memory was assessed using a recognition procedure, in which participants indicated if a particular test stimulus matched the memorized stimulus or not. In Experiment 2 participants attempted to either reproduce the locations or they picked the color from a whole range of possibilities. Both experiments revealed effects of dynamic visual noise (DVN) on memory for colors but not for locations. The implications of the results for theories of working memory and the methodological prospects for DVN as an experimental tool are discussed.
The footprints of visual attention in the Posner cueing paradigm revealed by classification images
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Shimozaki, Steven S.; Abbey, Craig K.
2002-01-01
In the Posner cueing paradigm, observers' performance in detecting a target is typically better in trials in which the target is present at the cued location than in trials in which the target appears at the uncued location. This effect can be explained in terms of a Bayesian observer where visual attention simply weights the information differently at the cued (attended) and uncued (unattended) locations without a change in the quality of processing at each location. Alternatively, it could also be explained in terms of visual attention changing the shape of the perceptual filter at the cued location. In this study, we use the classification image technique to compare the human perceptual filters at the cued and uncued locations in a contrast discrimination task. We did not find statistically significant differences between the shapes of the inferred perceptual filters across the two locations, nor did the observed differences account for the measured cueing effects in human observers. Instead, we found a difference in the magnitude of the classification images, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
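The weighted-information account described above can be sketched numerically. The simulation below is a hypothetical illustration (invented signal strength, noise level, and decision criterion, not the authors' model): processing quality, i.e. the noise, is identical at the cued and uncued locations, and only the prior weighting by cue validity differs, yet detection still comes out better on valid-cue trials.

```python
import numpy as np

# Hypothetical sketch (invented parameters, not the authors' model) of the
# Bayesian weighting account: noise is identical at the cued and uncued
# locations; only the prior weight given to each location differs.
rng = np.random.default_rng(0)

def cueing_sim(cue_validity=0.8, signal=1.0, noise_sd=1.0, n_trials=20000):
    """Simulate target detection with the target at the cued or uncued location."""
    target_at_cued = rng.random(n_trials) < cue_validity
    resp_cued = noise_sd * rng.standard_normal(n_trials) + signal * target_at_cued
    resp_uncued = noise_sd * rng.standard_normal(n_trials) + signal * (~target_at_cued)
    # Ideal observer: weight each location's evidence by its prior probability.
    decision = cue_validity * resp_cued + (1 - cue_validity) * resp_uncued
    hit_cued = float(np.mean(decision[target_at_cued] > 0.5))
    hit_uncued = float(np.mean(decision[~target_at_cued] > 0.5))
    return hit_cued, hit_uncued

hit_cued, hit_uncued = cueing_sim()
# Same processing quality at both locations, yet a cueing effect emerges
# purely from the differential weighting of the evidence.
print(hit_cued, hit_uncued)
```

With a cue validity of 0.8, the weighted observer detects targets more often on valid-cue trials than on invalid-cue trials even though the perceptual filter at each location is unchanged, which is the pattern the classification-image data supported.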
Audio-Visual Speech Perception Is Special
ERIC Educational Resources Information Center
Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.
2005-01-01
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…
Patterns and Trajectories in Williams Syndrome: The Case of Visual Orientation Discrimination
ERIC Educational Resources Information Center
Palomares, Melanie; Englund, Julia A.; Ahlers, Stephanie
2011-01-01
Williams Syndrome (WS) is a developmental disorder typified by deficits in visuospatial cognition. To understand the nature of this deficit, we characterized how people with WS perceive visual orientation, a fundamental ability related to object identification. We compared WS participants to typically developing children (3-6 years of age) and…
Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels
ERIC Educational Resources Information Center
Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz
2012-01-01
Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…
Perceiving the Present and a Systematization of Illusions
ERIC Educational Resources Information Center
Changizi, Mark A.; Hsieh, Andrew; Nijhawan, Romi; Kanai, Ryota; Shimojo, Shinsuke
2008-01-01
Over the history of the study of visual perception there has been great success at discovering countless visual illusions. There has been less success in organizing the overwhelming variety of illusions into empirical generalizations (much less explaining them all via a unifying theory). Here, this article shows that it is possible to…
Visual Perception of Touchdown Point During Simulated Landing
ERIC Educational Resources Information Center
Palmisano, Stephen; Gillam, Barbara
2005-01-01
Experiments examined the accuracy of visual touchdown point perception during oblique descents (1.5°-15°) toward a ground plane consisting of (a) randomly positioned dots, (b) a runway outline, or (c) a grid. Participants judged whether the perceived touchdown point was above or below a probe that appeared at a random position following each…
DOT National Transportation Integrated Search
1971-07-01
Many safety problems encountered in aviation have been attributed to visual illusions. One of the various types of visual illusions, that of apparent motion, includes as an aftereffect the apparent reversed motion of an object after it ceases real mo...
Chasing vs. Stalking: Interrupting the Perception of Animacy
ERIC Educational Resources Information Center
Gao, Tao; Scholl, Brian J.
2011-01-01
Visual experience involves not only physical features such as color and shape, but also higher-level properties such as animacy and goal-directedness. Perceiving animacy is an inherently dynamic experience, in part because agents' goal-directed behavior may be frequently in flux--unlike many of their physical properties. How does the visual system…
Projecting the visual carrying capacity of recreation areas
Thomas J. Nieman; Jane L. Futrell
1979-01-01
The aesthetic experience of people utilizing the recreational resources of the national parks and forests of the United States is of primary importance since a large percentage of perception is visual. Undesirable intrusions into this sphere of perception substantially reduce the level of enjoyment or satisfaction derived from the recreation experience. Perceived...
Coherent modulation of stimulus colour can affect visually induced self-motion perception.
Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2010-01-01
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. The colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation: both achromatic and chromatic modulations proved effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.
Understanding Soldier Robot Teams in Virtual Environments
2006-06-01
often with Verbal only communication than with Verbal plus Visual communication. This was mainly attributed to the fact that the transmitted images...performance. Participants ranked every Verbal plus Visual communication condition higher than any Verbal only communication condition. Finally, there were...UV and RM locations. Communication was either verbal only (either FF or via radio, depending on the location) or verbal plus visual. When visual
Household perceptions of coastal hazards and climate change in the Central Philippines.
Combest-Friedman, Chelsea; Christie, Patrick; Miles, Edward
2012-12-15
As a tropical archipelagic nation, the Philippines is particularly susceptible to coastal hazards, which are likely to be exacerbated by climate change. To improve coastal hazard management and adaptation planning, it is imperative that climate information be provided at relevant scales and that decision-makers understand the causes and nature of risk in their constituencies. Focusing on a municipality in the Central Philippines, this study examines local meteorological information and explores household perceptions of climate change and coastal hazard risk. First, meteorological data and local perceptions of changing climate conditions are assessed. Perceived changes in climate include an increase in rainfall and rainfall variability, an increase in intensity and frequency of storm events and sea level rise. Second, factors affecting climate change perceptions and perceived risk from coastal hazards are determined through statistical analysis. Factors tested include social status, economic standing, resource dependency and spatial location. Results indicate that perceived risk to coastal hazards is most affected by households' spatial location and resource dependency, rather than socio-economic conditions. However, important differences exist based on the type of hazard and nature of risk being measured. Resource dependency variables are more significant in determining perceived risk from coastal erosion and sea level rise than flood events. Spatial location is most significant in determining households' perceived risk to their household assets, but not perceived risk to their livelihood. Copyright © 2012 Elsevier Ltd. All rights reserved.
Predictive and postdictive mechanisms jointly contribute to visual awareness.
Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki
2009-09-01
One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.
Sobkow, Agata; Traczyk, Jakub; Zaleskiewicz, Tomasz
2016-01-01
Recent research has documented that affect plays a crucial role in risk perception. When no information about numerical risk estimates is available (e.g., probability of loss or magnitude of consequences), people may rely on positive and negative affect toward perceived risk. However, determinants of affective reactions to risks are poorly understood. In a series of three experiments, we addressed the question of whether and to what degree mental imagery eliciting negative affect and stress influences risk perception. In each experiment, participants were instructed to visualize consequences of risk taking and to rate riskiness. In Experiment 1, participants who imagined negative risk consequences reported more negative affect and perceived risk as higher compared to the control condition. In Experiment 2, we found that this effect was driven by affect elicited by mental imagery rather than its vividness and intensity. In this study, imagining positive risk consequences led to lower perceived risk than visualizing negative risk consequences. Finally, we tested the hypothesis that negative affect related to higher perceived risk was caused by negative feelings of stress. In Experiment 3, we introduced risk-irrelevant stress to show that participants in the stress condition rated perceived risk as higher in comparison to the control condition. This experiment showed that higher ratings of perceived risk were influenced by psychological stress. Taken together, our results demonstrate that affect-laden mental imagery dramatically changes risk perception through negative affect (i.e., psychological stress).
Incidence and outcomes of uveitis in juvenile rheumatoid arthritis, a synthesis of the literature.
Carvounis, Petros E; Herman, David C; Cha, Stephen; Burke, James P
2006-03-01
Juvenile rheumatoid arthritis (JRA) is the most common systemic cause of pediatric uveitis in Europe and North America. Uveitis is commonly perceived as a frequent sequela of JRA, and JRA-associated uveitis is commonly considered to have a complicated course with frequent adverse visual outcomes. We performed a systematic literature search for series of consecutive patients with JRA (as defined by the American College of Rheumatology criteria) reporting on the frequency of uveitis and/or complications of uveitis, published between January 1980 and December 2004. The main outcome measures were: the cumulative incidence of uveitis in JRA, the cumulative incidence of adverse visual outcome and that of complications in JRA-associated uveitis. Additionally, the influence of gender, presence of antinuclear antibody (ANA) and disease onset subtype on the likelihood of developing uveitis was examined. Analysis of pooled data from the 26 eligible series suggested a cumulative incidence of uveitis in JRA of 8.3% [95% confidence interval (CI), 7.5-9.1%]. The cumulative incidence of uveitis varied according to geographic location, being highest in Scandinavia, then the US, then Asia and lowest in India. JRA-associated uveitis was more common in pauciarticular than polyarticular onset patients [odds ratio (OR) = 3.2, 95% CI, 2.33-4.36] and in ANA-positive than ANA-negative patients (OR = 3.18, 95% CI, 2.22-4.54). Female gender was only a weak risk factor for the development of uveitis in JRA patients (OR = 1.69, 95% CI 1.09-2.62) and was not statistically significant after considering disease onset subtypes. In JRA-associated uveitis the cumulative incidence of adverse visual outcome (visual acuity < 20/40 OU) was 9.2% (95% CI: 4.7-15.8), of cataracts 20.5% (95% CI: 15.5-26.3), of glaucoma 18.9% (95% CI: 14.4-24.2) and of band keratopathy 15.7% (95% CI: 10.9-21.7).
The cumulative incidence of uveitis in JRA varies according to geographic location, presence of ANA, type of JRA onset and gender. Uveitis, adverse visual outcome, and complications in JRA are less frequent than commonly accepted.
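As an aside on the statistics quoted above, odds ratios and their 95% confidence intervals in such pooled analyses are conventionally computed from a 2x2 table via the Wald interval on the log odds ratio. The sketch below uses invented counts for illustration; they are not the review's data.

```python
import math

# Minimal sketch of the OR + 95% CI computation; the counts are invented
# for illustration and do not come from the review.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# e.g. uveitis vs no uveitis, pauciarticular vs polyarticular onset (made up)
or_, lo, hi = odds_ratio_ci(60, 340, 20, 380)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR whose confidence interval excludes 1 (as with the pauciarticular and ANA findings above) is conventionally read as statistically significant at the 5% level.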
What does visual suffix interference tell us about spatial location in working memory?
Allen, Richard J; Castellà, Judit; Ueno, Taiji; Hitch, Graham J; Baddeley, Alan D
2015-01-01
A visual object can be conceived of as comprising a number of features bound together by their joint spatial location. We investigate the question of whether the spatial location is automatically bound to the features or whether the two are separable, using a previously developed paradigm whereby memory is disrupted by a visual suffix. Participants were shown a sample array of four colored shapes, followed by a postcue indicating the target for recall. On randomly intermixed trials, a to-be-ignored suffix array consisting of two different colored shapes was presented between the sample and the postcue. In a random half of suffix trials, one of the suffix items overlaid the location of the target. If location was automatically encoded, one might expect the colocation of target and suffix to differentially impair performance. We carried out three experiments, cuing for recall by spatial location (Experiment 1), color or shape (Experiment 2), or both randomly intermixed (Experiment 3). All three studies showed clear suffix effects, but the colocation of target and suffix was differentially disruptive only when a spatial cue was used. The results suggest that purely visual shape-color binding can be retained and accessed without requiring information about spatial location, even when task demands encourage the encoding of location, consistent with the idea of an abstract and flexible visual working memory system.
Luminance gradient at object borders communicates object location to the human oculomotor system.
Kilpeläinen, Markku; Georgeson, Mark A
2018-01-25
The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing is, at both the neural and perceptual level, highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradient at the square's edges.
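The edge-location rule invoked above, taking an object's border to lie at the point of maximum luminance gradient, can be illustrated on a synthetic 1-D luminance profile. This is a sketch of the idea under invented stimulus parameters, not the authors' analysis code.

```python
import numpy as np

# Sketch: locate a luminance edge at the point of maximum luminance gradient.
# The sigmoid profile and its width are invented for illustration.
x = np.linspace(-2.0, 2.0, 401)          # position in degrees of visual angle
edge = 1.0 / (1.0 + np.exp(-x / 0.2))    # smooth luminance edge centred at x = 0
gradient = np.gradient(edge, x)          # d(luminance)/dx via central differences
edge_location = x[np.argmax(gradient)]   # point of maximum gradient
print(edge_location)
```

Shifting the profile (as the phase manipulation in the study effectively did) shifts the gradient peak by the same amount, which is the displacement the saccade endpoints were found to track.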
Social support and performance anxiety of college music students.
Schneider, Erin; Chesky, Kris
2011-09-01
This study characterized perceived social support and performance anxiety of college music students, compared characteristics to those of non-music majors, and explored the relationships between social support and performance anxiety. Subjects (n = 609) completed a questionnaire that included demographics, the Multidimensional Scale of Perceived Social Support (MSPSS), and visual analog scale measures of performance anxiety. Results showed that music majors perceived significantly lower levels of social support from significant others when compared to non-music majors. Perceived social support was significantly correlated with measures of performance anxiety. Students with greater perceived social support reported less frequent anxiety and lower levels of impact of anxiety on ability to perform. These findings may have practical implications for schools of music and conservatories.
van Hoesel, Richard J M
2015-04-01
One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and improved cost-benefit than estimated to date.
Size Constancy in Bat Biosonar? Perceptual Interaction of Object Aperture and Distance
Heinrich, Melina; Wiegrebe, Lutz
2013-01-01
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats. 
PMID:23630598
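The aperture-distance relation at the heart of the size-constancy argument above is simple trigonometry: the angular aperture an object subtends shrinks with distance, so recovering absolute width requires re-scaling aperture by perceived distance. The numbers below are invented for illustration.

```python
import math

# Sketch of the aperture-distance geometry behind size constancy;
# object width and distances are invented for illustration.
def aperture_deg(width_m, distance_m):
    """Angular aperture (deg) subtended by an object of a given width."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

def width_from_aperture(aperture, distance_m):
    """Invert the aperture: the re-scaling that size constancy would perform."""
    return 2 * distance_m * math.tan(math.radians(aperture) / 2)

a_near = aperture_deg(0.1, 0.5)   # 10 cm object at 0.5 m
a_far = aperture_deg(0.1, 1.0)    # same object at 1.0 m: smaller aperture
print(a_near, a_far, width_from_aperture(a_far, 1.0))
```

A size-constant observer would perform the second computation, combining aperture with perceived distance to recover the same 10 cm width at either distance; the bats in the study resolved both cues but did not combine them this way.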
Geng, Joy J.; Ruff, Christian C.; Driver, Jon
2008-01-01
The possible impact upon human visual cortex from saccades to remembered target locations was investigated using fMRI. A specific location in the upper-right or upper-left visual quadrant served as the saccadic target. After a delay of 2400 msecs, an auditory signal indicated whether to execute a saccade to that location (go trial) or to cancel the saccade and remain centrally fixated (no-go). Group fMRI analysis revealed activation specific to the remembered target location for executed saccades, in contralateral lingual gyrus. No-go trials produced similar, albeit significantly reduced effects. Individual retinotopic mapping confirmed that on go trials, quadrant-specific activations arose in those parts of ventral V1, V2, and V3 that coded the target location for the saccade, whereas on no-go trials only the corresponding parts of V2 and V3 were significantly activated. These results indicate that a spatial-motor saccadic task (i.e. making an eye-movement to a remembered location) is sufficient to activate retinotopic visual cortex spatially corresponding to the target location, and that this activation is also present (though reduced) when no saccade is executed. We discuss the implications of finding that saccades to remembered locations can affect early visual cortex, not just those structures conventionally associated with eye-movements, in relation to recent ideas about attention, spatial working memory, and the notion that recently activated representations can be ‘refreshed’ when needed. PMID:18510442
A preconscious neural mechanism of hypnotically altered colors: a double case study.
Koivisto, Mika; Kirjanen, Svetlana; Revonsuo, Antti; Kallio, Sakari
2013-01-01
Hypnotic suggestions may change the perceived color of objects. Given that chromatic stimulus information is processed rapidly and automatically by the visual system, how can hypnotic suggestions affect perceived colors in a seemingly immediate fashion? We studied the mechanisms of such color alterations by measuring electroencephalography in two highly suggestible participants as they perceived briefly presented visual shapes under posthypnotic color alteration suggestions such as "all the squares are blue". One participant consistently reported seeing the suggested colors. Her reports correlated with enhanced evoked upper beta-band activity (22 Hz) 70-120 ms after stimulus in response to the shapes mentioned in the suggestion. This effect was not observed in a control condition where the participants merely tried to simulate the effects of the suggestion on behavior. The second participant neither reported color alterations nor showed the evoked beta activity, although her subjective experience and event-related potentials were changed by the suggestions. The results indicate a preconscious mechanism that first compares early visual input with a memory representation of the suggestion and consequently triggers the color alteration process in response to the objects specified by the suggestion. Conscious color experience is not purely the result of bottom-up processing but it can be modulated, at least in some individuals, by top-down factors such as hypnotic suggestions.
ERIC Educational Resources Information Center
Trainer, Erik Harrison
2012-01-01
Trust plays an important role in collaborations because it creates an environment in which people can openly exchange ideas and information with one another and engineer innovative solutions together with less perceived risk. The rise in globally distributed software development has created an environment in which workers are likely to have less…
Susceptibility to the Flash-Beep Illusion Is Increased in Children Compared to Adults
ERIC Educational Resources Information Center
Innes-Brown, Hamish; Barutchu, Ayla; Shivdasani, Mohit N.; Crewther, David P.; Grayden, David B.; Paolini, Antonio
2011-01-01
Audio-visual integration was studied in children aged 8-17 (N = 30) and adults (N = 22) using the "flash-beep illusion" paradigm, where the presentation of two beeps causes a single flash to be perceived as two flashes ("fission" illusion), and a single beep causes two flashes to be perceived as one flash ("fusion" illusion). Children reported…
A Master Trainer Class for Professionals in Teaching the UltraCane Electronic Travel Device
ERIC Educational Resources Information Center
Penrod, William; Corbett, Michael D.; Blasch, Bruce
2005-01-01
Electronic travel devices are used to transform information about the environment that would normally be perceived through the visual sense into a form that can be perceived by people who are blind or have low vision through another sense (Blasch, Long, & Griffin-Shirley, 1989). They are divided into two broad categories: primary devices and…
ERIC Educational Resources Information Center
Tsubomi, Hiroyuki; Ikeda, Takashi; Osaka, Naoyuki
2012-01-01
Perceived brightness is well described by Stevens' power function (S. S. Stevens, 1957, On the psychophysical law, "Psychological Review", Vol. 64, pp. 153-181), with a power exponent of 0.33 (the cubic-root function of luminance). The power exponent actually varies across individuals, yet little is known about neural substrates underlying this…
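As an illustrative aside (not part of the record above), Stevens' power function with the 0.33 exponent can be sketched in a few lines; the scaling constant k and the luminance values are arbitrary:

```python
def perceived_brightness(luminance, k=1.0, exponent=0.33):
    # Stevens' power law: psychological magnitude S = k * I^n,
    # with n ≈ 0.33 for brightness (roughly the cube root of luminance).
    return k * luminance ** exponent

# Doubling luminance raises perceived brightness by only ~26%,
# illustrating how compressive the cube-root function is.
ratio = perceived_brightness(2.0) / perceived_brightness(1.0)  # ≈ 1.257
```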
Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.
Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna
2016-09-01
Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference features markedly different parameters: people often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals, estimated using two reinforcement learning models, tracked activity in ventral striatum and ventromedial pFC, structures associated with reinforcement learning, as well as regions associated with updating social impressions, including TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
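In the simplest reinforcement learning models, the trial-by-trial learning signal described above is a delta-rule prediction error. The sketch below is a generic Rescorla-Wagner update, not the authors' actual models; the learning rate and reward coding are assumptions:

```python
def update_value(value, reward, alpha=0.2):
    # Delta-rule update: the prediction error (reward - value) is the
    # trial-by-trial learning signal that such models estimate.
    prediction_error = reward - value
    return value + alpha * prediction_error, prediction_error

# A perceiver learning that visual cues reliably predict one target's
# emotion: repeated correct guesses (coded as reward = 1) drive the
# cue's predictive value toward 1 while the prediction error shrinks.
v = 0.0
for _ in range(20):
    v, pe = update_value(v, reward=1.0)
```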
Harjunen, Ville J; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M
2017-01-01
Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how visual cues from the perceiver's body affect visuo-tactile interaction in endogenous spatial attention, and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands amplified attention-related negativity in the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch, and that this effect is specifically present in ERPs related to early and late sensory processing as well as response inhibition, but does not extend to later attention- and memory-related P3 activity. Finally, the experiments provide comparable scenarios for estimating the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements.
Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System
Ajina, Sara; Bridge, Holly
2017-01-01
Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337
Exogenous attention facilitates location transfer of perceptual learning.
Donovan, Ian; Szpiro, Sarit; Carrasco, Marisa
2015-01-01
Perceptual skills can be improved through practice on a perceptual task, even in adulthood. Visual perceptual learning is known to be mostly specific to the trained retinal location, which is considered as evidence of neural plasticity in retinotopic early visual cortex. Recent findings demonstrate that transfer of learning to untrained locations can occur under some specific training procedures. Here, we evaluated whether exogenous attention facilitates transfer of perceptual learning to untrained locations, both adjacent to the trained locations (Experiment 1) and distant from them (Experiment 2). The results reveal that attention facilitates transfer of perceptual learning to untrained locations in both experiments, and that this transfer occurs both within and across visual hemifields. These findings show that training with exogenous attention is a powerful regime that is able to overcome the major limitation of location specificity.
Gravity as a Strong Prior: Implications for Perception and Action.
Jörges, Björn; López-Moliner, Joan
2017-01-01
In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality) or visually and bodily (space travel). As visually and bodily perceived gravity, as well as an interiorized representation of earth gravity, are involved in a series of tasks such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-discrepant gravity conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth-gravity environment. For this reason, the reliability of this representation is extremely high and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong-prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
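A "strong prior" in this Bayesian sense can be made concrete with precision-weighted Gaussian cue combination; the numbers below (prior and sensory variances, a discrepant VR gravity of 5 m/s²) are purely illustrative:

```python
def combine_gaussian(prior_mean, prior_var, obs_mean, obs_var):
    # Precision-weighted fusion of a Gaussian prior and a Gaussian
    # observation: the posterior mean is pulled toward whichever
    # source has the smaller variance (higher reliability).
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / obs_var)
    post_mean = w_prior * prior_mean + (1 - w_prior) * obs_mean
    post_var = 1 / (1 / prior_var + 1 / obs_var)
    return post_mean, post_var

# A very reliable earth-gravity prior (9.81 m/s^2, tiny variance)
# all but overrules a discrepant visual estimate of 5 m/s^2.
mean, var = combine_gaussian(9.81, 0.01, 5.0, 4.0)
```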
An investigation of the spatial selectivity of the duration after-effect.
Maarseveen, Jim; Hogendoorn, Hinze; Verstraten, Frans A J; Paffen, Chris L E
2017-01-01
Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy. Copyright © 2016 Elsevier Ltd. All rights reserved.
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
Gender-related effects of vision impairment characteristics on depression in Korea.
Park, Hye Won; Lee, Wanhyung; Yoon, Jin-Ha
2018-04-01
To investigate the gender-specific associations between perceived vision impairment and symptoms of depression, we used data from the 2012 Korean Longitudinal Study of Aging database of 7448 individuals aged 45 years and older. Questionnaires assessing depression symptoms and perceived visual impairment at near, at distance, and in general were administered. Logistic regression analyses were used to evaluate whether visual impairment could lead to depression, adjusting for the potential confounders of age, socioeconomic status (household income, education level, marital status, and employment status), and health behaviors (alcohol consumption, smoking, and physical activity level) after gender stratification. Perceived general and near vision impairment were significantly associated with symptoms of depression in males (odds ratio [OR] = 2.78 and 2.54; 95% confidence interval [CI], 1.91-4.04 and 1.78-3.63). Perceived general and distance vision impairment were significantly associated with symptoms of depression in females (OR = 2.16 and 2.08; 95% CI, 1.67-2.79 and 1.61-2.69). General sight with near vision impairment in males, and general sight with distance vision impairment in females, could be stronger predictors of depression than other vision impairment combinations (area under the receiver operating characteristic curve [AUROC], 0.6461; p = 0.0425 in males; AUROC, 0.6270; p = 0.0318 in females). In conclusion, gender differences were found in the characteristics of visual impairment associated with symptoms of depression. Ophthalmologists should be aware that near vision impairment in males and distance vision impairment in females have an adjunctive effect that might contribute to symptoms of depression.
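For readers unfamiliar with the reported statistics, an odds ratio and its Wald confidence interval follow directly from a logistic-regression coefficient; the standard error below (0.19) is a hypothetical value chosen only so the interval roughly matches the reported 1.91-4.04:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    # OR = exp(beta); 95% Wald CI = exp(beta ± 1.96 * SE).
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

or_, (lo, hi) = odds_ratio_ci(beta=math.log(2.78), se=0.19)
# or_ = 2.78; the interval comes out near (1.92, 4.03)
```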
Jung, Eunice L.; Zadbood, Asieh; Lee, Sang-Hun; Tomarken, Andrew J.; Blake, Randolph
2013-01-01
We live in a cluttered, dynamic visual environment that poses a challenge for the visual system: for objects, including those that move about, to be perceived, information specifying those objects must be integrated over space and over time. Does a single, omnibus mechanism perform this grouping operation, or does grouping depend on separate processes specialized for different feature aspects of the object? To address this question, we tested a large group of healthy young adults on their abilities to perceive static fragmented figures embedded in noise and to perceive dynamic point-light biological motion figures embedded in dynamic noise. There were indeed substantial individual differences in performance on both tasks, but none of the statistical tests we applied to this data set uncovered a significant correlation between those performance measures. These results suggest that the two tasks, despite their superficial similarity, require different segmentation and grouping processes that are largely unrelated to one another. Whether those processes are embodied in distinct neural mechanisms remains an open question. PMID:24198799
The Perception of Cooperativeness Without Any Visual or Auditory Communication.
Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan
2015-12-01
Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.
Perceived reachability in single- and multiple-degree-of-freedom workspaces.
Gabbard, Carl; Ammar, Diala; Lee, Sunghan
2006-11-01
In comparisons of perceived (imagined) and actual reaches, investigators consistently find a tendency to overestimate. A primary explanation for that phenomenon is that individuals reach as a "whole-body engagement" involving multiple degrees of freedom (m-df). The authors examined right-handers (N = 28) in 1-df and m-df workspaces by having them judge the reachability of targets at midline, right, and left visual fields. Response profiles were similar for total error. Both conditions reflected an overestimation bias, although the bias was significantly greater in the m-df condition. Midline responses differed (greater overestimation) from those of right and left visual fields, which were similar. Although the authors would have predicted better performance in the m-df condition, it seems plausible that if individuals think in terms of m-df, they may feel more confident in that condition and thereby exhibit greater overestimation. Furthermore, the authors speculate that the reduced bias at the side fields may be attributed to a more conservative strategy based in part on perceived reach constraints.
Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T
2017-01-01
Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.
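The mixture model used above to characterize precision and guess rate can be sketched as a two-component likelihood; the widths, guess rates, and toy error values below are illustrative, and real analyses typically use a circular (von Mises) target component rather than a Gaussian:

```python
import math

def mixture_loglik(errors, sd, guess_rate, span=180.0):
    # Log-likelihood of recall errors under a two-component mixture:
    # with probability (1 - guess_rate) the response is centred on the
    # target (Gaussian of width sd); otherwise it is a uniform guess
    # over the whole orientation space.
    ll = 0.0
    for e in errors:
        target = math.exp(-e * e / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))
        ll += math.log((1 - guess_rate) * target + guess_rate / span)
    return ll

# Small errors are far more likely under a precise, low-guess model
# than under a guess-heavy one.
errors = [-3.0, 1.5, 0.5, -2.0, 4.0]
precise = mixture_loglik(errors, sd=5.0, guess_rate=0.05)
guessy = mixture_loglik(errors, sd=5.0, guess_rate=0.8)
```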
[Visual perception and its disorders].
Ruf-Bächtiger, L
1989-11-21
It is the brain, and not the eye, that decides what is perceived. In spite of this fact, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, various functional domains of visual perception can be distinguished. Among the more important of these domains are digit span, visual discrimination, and figure-ground discrimination. Evaluating these functional domains allows us to better understand children with disorders of visual perception and to develop more effective treatment methods.
A flight by periscope and where it landed.
Roscoe, Stanley N; Acosta, Hector M
2008-06-01
This study defines display design factors linking visual accommodation and the perceived size of distant objects. In 1947, in anticipation of augmented contact and sensor-relayed contact displays, a periscope was installed in an airplane to serve as a sensor-based contact display simulator. To achieve normal landing performance, however, the unity image had to be magnified. This successful intervention, first published in 1966 in Human Factors, implicated oculomotor mechanisms and higher perceptual functions and became the observational basis for a series of investigative hypotheses. Observers registered the perceived size of the collimated image of a "moon" by adjusting a disk of light while alternately providing optometric measurements of accommodative distance. Various investigators found high correlations between focal distances and perceived moon sizes. The simulated moon provided a superior vehicle for revealing the relationship between focal distance and perceived size and the factors affecting both. The operational display design implications, and the possibility of a partial explanation for the moon illusion, provided the motivation for an important doctoral research project involving eight factors that affect both focal distance and perceived size. The investigation reaffirmed that virtual images, as found in head-up and head-mounted displays (HUDs and HMDs, respectively), do not consistently draw focus to optical infinity, and that a variety of factors necessarily manipulated by display designers and present in many operational systems can affect visual performance partially through the mediation of accommodation.
Bergström, Fredrik; Eriksson, Johan
2015-01-01
Although non-consciously perceived information has previously been assumed to be short-lived (< 500 ms), recent findings show that non-consciously perceived information can be maintained for at least 15 s. Such findings can be explained as working memory without a conscious experience of the information to be retained. However, whether or not working memory can operate on non-consciously perceived information remains controversial, and little is known about the nature of such non-conscious visual short-term memory (VSTM). Here we used continuous flash suppression to render stimuli non-conscious, to investigate the properties of non-consciously perceived representations in delayed match-to-sample (DMS) tasks. In Experiment I we used variable delays (5 or 15 s) and found that performance was significantly better than chance and was unaffected by delay duration, thereby replicating previous findings. In Experiment II the DMS task required participants to combine information of spatial position and object identity on a trial-by-trial basis to successfully solve the task. We found that the conjunction of spatial position and object identity was retained, thereby verifying that non-conscious, trial-specific information can be maintained for prospective use. We conclude that our results are consistent with a working memory interpretation, but that more research is needed to verify this interpretation.
Quantifying how the combination of blur and disparity affects the perceived depth
NASA Astrophysics Data System (ADS)
Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick
2011-03-01
The influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes is studied in this paper. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue, but it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth and keep the apparent depth unaltered. We conducted a subjective experiment using a two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives larger depth compared to images without any blur in the background. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.
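The point of subjective equality extracted from the psychometric fit can be illustrated with a cumulative-Gaussian model; the parameter values and the bisection search below are assumptions for the sketch, not the authors' fitting procedure:

```python
import math

def psychometric(disparity, pse, slope):
    # Cumulative-Gaussian psychometric function: probability of judging
    # one stimulus of a pair as having the larger perceived depth.
    return 0.5 * (1 + math.erf((disparity - pse) / (slope * math.sqrt(2))))

def find_pse(p_func, lo=-10.0, hi=10.0, tol=1e-6):
    # Bisection for the stimulus level at which p = 0.5, i.e. the
    # point of subjective equality between the two alternatives.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_func(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With an assumed true PSE of 2.0 disparity units, bisection recovers it.
pse = find_pse(lambda d: psychometric(d, pse=2.0, slope=1.5))
```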
The Impact of Rope Jumping Exercise on Physical Fitness of Visually Impaired Students
ERIC Educational Resources Information Center
Chen, Chao-Chien; Lin, Shih-Yen
2011-01-01
The main purpose of this study was to investigate the impact of rope jumping exercise on the health-related physical fitness of visually impaired students. The participants' physical fitness was examined before and after the training. The exercise intensity of the experimental group was controlled with Rating of Perceived Exertion (RPE) (values…
Communication Variables Associated with Hearing-Impaired/Vision-Impaired Persons--A Pilot-Study.
ERIC Educational Resources Information Center
Hicks, Wanda M.
1979-01-01
A study involving eight youths and adults with retinitis pigmentosa (and only 20 degree visual field and hearing loss of at least 20 decibels) determined variance in the ability to perceive and comprehend visual stimuli presented by way of the manual modality when modifications were made in configuration, movement speed, movement size, and…
Viewing Objects and Planning Actions: On the Potentiation of Grasping Behaviours by Visual Objects
ERIC Educational Resources Information Center
Makris, Stergios; Hadar, Aviad A.; Yarrow, Kielan
2011-01-01
How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This "affordances" hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common…
ERIC Educational Resources Information Center
Gitlin, L. N.; Mount, J.; Lucas, W.; Weirich, L. C.; Gramberg, L.
1997-01-01
This study investigated the musculoskeletal consequences of travel aids, particularly white canes and guide dogs, as perceived by 21 individuals (ages 27-68) with visual impairments or blindness. They experienced a variety of negative physical effects that they denied, ignored, or minimized because of benefits derived from being independently…
Curriculum Design and Its Relationship to Cultural Visual Production
ERIC Educational Resources Information Center
Smith, Wendell Rudolph
2014-01-01
Stakeholders in the arts perceive a disconnect between the visual art curriculum at a university in the West Indies and participation of graduates in the market economy. The role of this university in promoting social and economic development is crucial to the region. Graduates are often left with limited options in which to make a living from…
ERIC Educational Resources Information Center
Simpkins, N. K.
2014-01-01
This article reports an investigation into undergraduate student experiences and views of a visual or "blocks" based programming language and its environment. An additional and central aspect of this enquiry is to substantiate the perceived degree of transferability of programming skills learnt within the visual environment to a typical…
Liu, Shu; Yu, Marco; Weinreb, Robert N; Lai, Gilda; Lam, Dennis Shun-Chiu; Leung, Christopher Kai-Shun
2014-05-02
We compared the detection of visual field progression and its rate of change between standard automated perimetry (SAP) and Matrix frequency doubling technology perimetry (FDTP) in glaucoma. We prospectively followed 217 eyes (179 glaucoma and 38 normal eyes) with SAP and FDTP testing at 4-month intervals for ≥36 months. Pointwise linear regression analysis was performed. A test location was considered progressing when the rate of change of visual sensitivity was ≤-1 dB/y for nonedge and ≤-2 dB/y for edge locations. Three criteria were used to define progression in an eye: ≥3 adjacent nonedge test locations (conservative), any three locations (moderate), and any two locations (liberal) progressed. The rate of change of visual sensitivity was calculated with linear mixed models. Of the 217 eyes, 6.1% and 3.9% progressed with the conservative criteria, 14.5% and 5.6% with the moderate criteria, and 20.1% and 11.7% with the liberal criteria by FDTP and SAP, respectively. Taking all test locations into consideration (total, 54 × 179 locations), FDTP detected more progressing locations (176) than SAP (103, P < 0.001). The rate of change of visual field mean deviation (MD) was significantly faster for FDTP (all with P < 0.001). No eyes in the normal group showed progression under the conservative or moderate criteria. With a faster rate of change of visual sensitivity, FDTP detected more progressing eyes than SAP at a comparable level of specificity. Frequency doubling technology perimetry can provide a useful alternative to monitor glaucoma progression.
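The pointwise criteria described above can be sketched as follows. This is a hedged illustration with hypothetical data; in particular, reading "≥3 adjacent nonedge locations" as a non-edge progressing location with at least two progressing non-edge neighbours is an assumption, not the authors' exact adjacency rule.

```python
def classify_progression(slopes, edge_locations, neighbours):
    """Flag progressing test locations and apply the three eye-level
    criteria. slopes: location -> rate of change in dB/y; thresholds are
    <= -1 dB/y for non-edge and <= -2 dB/y for edge locations."""
    progressing = {loc for loc, rate in slopes.items()
                   if rate <= (-2.0 if loc in edge_locations else -1.0)}
    nonedge = progressing - edge_locations
    # Conservative criterion (one simple reading of ">= 3 adjacent"):
    # a progressing non-edge location with >= 2 progressing non-edge
    # neighbours forms a cluster of at least three.
    conservative = any(
        sum(1 for n in neighbours.get(loc, ()) if n in nonedge) >= 2
        for loc in nonedge)
    return {
        "conservative": conservative,
        "moderate": len(progressing) >= 3,   # any three locations
        "liberal": len(progressing) >= 2,    # any two locations
    }

# Toy 2x3 grid of test locations (hypothetical rates, not study data):
slopes = {"a": -1.2, "b": -1.1, "c": -0.3,
          "d": -1.5, "e": -0.2, "f": -2.4}
edge = {"f"}
adj = {"a": ("b", "d"), "b": ("a", "c", "e"), "c": ("b", "f"),
       "d": ("a", "e"), "e": ("b", "d", "f"), "f": ("c", "e")}
print(classify_progression(slopes, edge, adj))
# -> {'conservative': True, 'moderate': True, 'liberal': True}
```

In the toy grid, locations a, b, and d exceed the non-edge threshold and f exceeds the edge threshold, so all three criteria fire; the per-location slopes would come from pointwise linear regression over the follow-up visits.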
Kurosaki, Mitsuhaya; Shirao, Naoko; Yamashita, Hidehisa; Okamoto, Yasumasa; Yamawaki, Shigeto
2006-02-15
Our aim was to study the gender differences in brain activation upon viewing visual stimuli of distorted images of one's own body. We performed functional magnetic resonance imaging on 11 healthy young men and 11 healthy young women using the "body image tasks" which consisted of fat, real, and thin shapes of the subject's own body. Comparison of the brain activation upon performing the fat-image task versus real-image task showed significant activation of the bilateral prefrontal cortex and left parahippocampal area including the amygdala in the women, and significant activation of the right occipital lobe including the primary and secondary visual cortices in the men. Comparison of brain activation upon performing the thin-image task versus real-image task showed significant activation of the left prefrontal cortex, left limbic area including the cingulate gyrus and paralimbic area including the insula in women, and significant activation of the occipital lobe including the left primary and secondary visual cortices in men. These results suggest that women tend to perceive distorted images of their own bodies by complex cognitive processing of emotion, whereas men tend to perceive distorted images of their own bodies by object visual processing and spatial visual processing.
How does parents' visual perception of their child's weight status affect their feeding style?
Yilmaz, Resul; Erkorkmaz, Ünal; Ozcetin, Mustafa; Karaaslan, Erhan
2013-01-01
Eating style is one of the prominent factors that determine energy intake, and parental perception of a child's weight status is one of the factors that shape parental feeding style. The aim of this study was to evaluate the relationship between maternal visual perception of children's weight status and maternal feeding style. A cross-sectional survey was completed with the mothers of 380 preschool children aged 5 to 7 years (mean age 6.14 years). Visual perception scores were measured with a sketch, and maternal feeding style was measured with the validated "Parental Feeding Style Questionnaire". The parental feeding dimensions "emotional feeding" and "encouragement to eat" subscale scores were low in children classified as overweight by visual perception. "Emotional feeding" and "permissive control" subscale scores differed significantly between children whose weight was correctly perceived and those perceived as lower-weight owing to maternal misperception. Various feeding styles were related to maternal visual perception. The best approach to preventing obesity and underweight may be to focus on achieving correct parental perception of the weight status of their children, thus improving parental skills and leading parents to implement proper feeding styles. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
Frahm, Ken Steffen; Mørch, Carsten Dahl; Grill, Warren M; Andersen, Ole Kæseler
2013-09-01
During electrocutaneous stimulations, variation in skin properties across locations can lead to differences in neural activation. However, little focus has been given to the effect of different skin thicknesses on neural activation. Electrical stimulation was applied to six sites across the sole of the foot. The intensities used were two and four times perception threshold. The subjects (n = 8) rated the perception quality and intensity using the McGill Pain Questionnaire and a visual analog scale (VAS). A finite element model was developed and combined with the activation function (AF) to estimate neural activation. Electrical stimulation was perceived as significantly less sharp at the heel compared to all other sites, except one site in the forefoot (logistic regression, p < 0.05). The VAS scores were significantly higher in the arch than at the heel (RM ANOVA, p < 0.05). The model showed that the AF was between 91 and 231 % higher at the five other sites than at the heel. The differences in perception across the sole of the foot indicated that the CNS received different inputs depending on the stimulus site. The lower AF at the heel indicated that the skin thicknesses could contribute to the perceived differences.
The Blue Arc Entoptic Phenomenon in Glaucoma (An American Ophthalmological Thesis)
Pasquale, Louis R.; Brusie, Steven
2013-01-01
Purpose: To determine whether the blue arc entoptic phenomenon, a positive visual response originating from the retina with a shape that conforms to the topology of the nerve fiber layer, is depressed in glaucoma. Methods: We recruited a cross-sectional, nonconsecutive sample of 202 patients from a single institution in a prospective manner. Subjects underwent full ophthalmic examination, including standard automated perimetry (Humphrey Visual Field 24–2) or frequency doubling technology (Screening C 20–5) perimetry. Eligible patients viewed computer-generated stimuli under conditions chosen to optimize perception of the blue arcs. Unmasked testers instructed patients to report whether they were able to perceive blue arcs but did not reveal what response was expected. We created multivariable logistic regression models to ascertain the demographic and clinical parameters associated with perceiving the blue arcs. Results: In multivariable analyses, each 0.1 unit increase in cup-disc ratio was associated with 36% reduced likelihood of perceiving the blue arcs (odds ratio [OR] = 0.66 [95% confidence interval (CI): 0.53–0.83], P<.001). A smaller mean defect was associated with an increased likelihood of perceiving the blue arcs (OR=1.79 [95% CI: 1.40–2.28]; P<.001), while larger pattern standard deviation (OR=0.72 [95% CI: 0.57–0.91]; P=.005) and abnormal glaucoma hemifield test (OR=0.25 [0.10–0.65]; P=.006) were associated with a reduced likelihood of perceiving them. Older age and media opacity were also associated with an inability to perceive the blue arcs. Conclusion: In this study, the inability to perceive the blue arcs correlated with structural and functional features associated with glaucoma, although older age and media opacity were also predictors of this entoptic response. PMID:24167324
Neural representations of contextual guidance in visual search of real-world scenes.
Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P
2013-05-01
Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.
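As a rough illustration of MVPA-style decoding of coarse contextual location, the following is a minimal nearest-centroid sketch with fabricated 4-voxel patterns; it is not the authors' classifier, feature set, or data.

```python
import math
import random

def nearest_centroid_decode(train, test_pattern):
    """Classify a held-out activity pattern by its nearest class
    centroid (Euclidean distance). train: label -> list of patterns."""
    def centroid(patterns):
        return [sum(v) / len(v) for v in zip(*patterns)]
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    cents = {label: centroid(p) for label, p in train.items()}
    return min(cents, key=lambda lab: dist(cents[lab], test_pattern))

# Fabricated 4-voxel patterns for two coarse context locations:
random.seed(0)
train = {
    "left":  [[1.0 + random.gauss(0, 0.1), 0.2, 0.1, 0.0] for _ in range(5)],
    "right": [[0.0, 0.1, 0.2, 1.0 + random.gauss(0, 0.1)] for _ in range(5)],
}
print(nearest_centroid_decode(train, [0.9, 0.2, 0.1, 0.1]))  # -> left
```

Real MVPA pipelines typically use cross-validated linear classifiers over many voxels per region of interest, but the core idea is the same: above-chance decoding of the expected target location from a region's activity pattern implies that the region carries contextual information.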