Slow and fast visual motion channels have independent binocular-rivalry stages.
van de Grind, W. A.; van Hof, P.; van der Smagt, M. J.; Verstraten, F. A.
2001-01-01
We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness. PMID:11270442
Filling-in visual motion with sounds.
Väljamäe, A; Soto-Faraco, S
2008-10-01
Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
DOT National Transportation Integrated Search
1971-07-01
Many safety problems encountered in aviation have been attributed to visual illusions. One of the various types of visual illusions, that of apparent motion, includes as an aftereffect the apparent reversed motion of an object after it ceases real mo...
DOT National Transportation Integrated Search
1969-08-01
Visual illusions have been a persistent problem in aviation research. The spiral aftereffect (SAE) is an example of one type of visual illusion--that which occurs following the cessation of real motion. Duration and intensity of the SAE was evaluated...
Adaptation aftereffects in the perception of gender from biological motion.
Troje, Nikolaus F; Sadr, Javid; Geyer, Henning; Nakayama, Ken
2006-07-28
Human visual perception is highly adaptive. While this has been known and studied for a long time in domains such as color vision, motion perception, or the processing of spatial frequency, a number of more recent studies have shown that adaptation and adaptation aftereffects also occur in high-level visual domains like shape perception and face recognition. Here, we present data that demonstrate a pronounced aftereffect in response to adaptation to the perceived gender of biological motion point-light walkers. A walker that is perceived to be ambiguous in gender under neutral adaptation appears to be male after adaptation with an exaggerated female walker and female after adaptation with an exaggerated male walker. We discuss this adaptation aftereffect as a tool to characterize and probe the mechanisms underlying biological motion perception.
Mental Rotation Meets the Motion Aftereffect: The Role of hV5/MT+ in Visual Mental Imagery
ERIC Educational Resources Information Center
Seurinck, Ruth; de Lange, Floris P.; Achten, Erik; Vingerhoets, Guy
2011-01-01
A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of an MAE while subjects…
Nakajima, Sawako; Ino, Shuichi; Ifukube, Tohru
2007-01-01
Mixed Reality (MR) technologies have recently been explored in many areas of Human-Machine Interface (HMI) such as medicine, manufacturing, entertainment and education. However, MR sickness, a kind of motion sickness, is caused by sensory conflicts between the real world and the virtual world. The purpose of this paper is to develop a new evaluation method for motion and MR sickness. The paper investigates the relationship between whole-body vibration related to MR technologies and the motion aftereffect (MAE) phenomenon in the human visual system. The MR environment is modeled after advanced driver assistance systems in near-future vehicles. Seated subjects in the MR simulator were shaken in the pitch direction at frequencies ranging from 0.1 to 2.0 Hz. Results show that the MAE is useful for evaluating MR sickness incidence. In addition, a method to reduce MR sickness by auditory stimulation is proposed.
Perceptual adaptation in the use of night vision goggles
NASA Technical Reports Server (NTRS)
Durgin, Frank H.; Proffitt, Dennis R.
1992-01-01
The image intensification (I²) systems studied for this report were the biocular AN/PVS-7 (NVG) and the binocular AN/AVS-6 (ANVIS). Both are quite impressive for purposes of revealing the structure of the environment in a fairly straightforward way in extremely low-light conditions. But these systems represent an unusual viewing medium. The perceptual information available through I² systems differs in a variety of ways from the typical input of everyday vision, and extensive training and practice are required for optimal use. Using this sort of system involves a kind of perceptual skill learning, but it may also involve visual adaptations that are not simply an extension of normal vision. For example, the visual noise evident in the goggles in very low-light conditions results in unusual statistical properties in the visual input. Because we had recently discovered a strong and enduring aftereffect of perceived texture density which seemed to be sensitive to precisely the sorts of statistical distortions introduced by I² systems, it occurred to us that visual noise of this sort might be a highly effective adapting stimulus for texture density and might produce an aftereffect that extended into normal vision once the goggles were removed. We have not found any experimental evidence that I² systems produce texture density aftereffects. The nature of the texture density aftereffect is briefly explained, followed by an account of our studies of I² systems and our most recent work on the texture density aftereffect. A test for spatial frequency adaptation after exposure to NVGs is also reported, as is a study of perceived depth from motion (motion parallax) while wearing the biocular goggles. We conclude with a summary of our findings.
The continuous Wagon Wheel Illusion depends on, but is not identical to neuronal adaptation.
VanRullen, Rufin
2007-07-01
The occurrence of perceived reversed motion while observers view a continuous, periodically moving stimulus (a bistable phenomenon coined the "continuous Wagon Wheel Illusion" or "c-WWI") has been taken as evidence that some aspects of motion perception rely on discrete sampling of visual information. Alternative accounts rely on the possibility of a motion aftereffect that may become visible even while the adapting stimulus is present. Here I show that motion adaptation might be necessary, but is not sufficient, to explain the illusion. When local adaptation is prevented by slowly drifting the moving wheel across the retina, the c-WWI tends to decrease, as do other bistable percepts (e.g. binocular rivalry). However, the strength of the c-WWI and that of adaptation (as measured by either the static or flicker motion aftereffects) are not directly related: although the c-WWI decreases with increasing eccentricity, the aftereffects actually intensify concurrently. A similar dissociation can be induced by manipulating stimulus contrast. This indicates that the c-WWI may be enabled by, but is not equivalent to, local motion adaptation, and that other factors such as discrete sampling may be involved in its generation.
Czuba, Thaddeus B; Rokers, Bas; Guillet, Kyle; Huk, Alexander C; Cormack, Lawrence K
2011-09-26
Motion aftereffects are historically considered evidence for neuronal populations tuned to specific directions of motion. Despite a wealth of motion aftereffect studies investigating 2D (frontoparallel) motion mechanisms, there is a remarkable dearth of psychophysical evidence for neuronal populations selective for the direction of motion through depth (i.e., tuned to 3D motion). We compared the effects of prolonged viewing of unidirectional motion under dichoptic and monocular conditions and found large 3D motion aftereffects that could not be explained by simple inheritance of 2D monocular aftereffects. These results (1) demonstrate the existence of neurons tuned to 3D motion as distinct from monocular 2D mechanisms, (2) show that distinct 3D direction selectivity arises from both interocular velocity differences and changing disparities over time, and (3) provide a straightforward psychophysical tool for further probing 3D motion mechanisms.
Spatiotopic coding during dynamic head tilt
Turi, Marco; Burr, David C.
2016-01-01
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding. NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation. PMID:27903636
Neural Integration of Information Specifying Human Structure from Form, Motion, and Depth
Jackson, Stuart; Blake, Randolph
2010-01-01
Recent computational models of biological motion perception operate on ambiguous two-dimensional representations of the body (e.g., snapshots, posture templates) and contain no explicit means for disambiguating the three-dimensional orientation of a perceived human figure. Are there neural mechanisms in the visual system that represent a moving human figure’s orientation in three dimensions? To isolate and characterize the neural mechanisms mediating perception of biological motion, we used an adaptation paradigm together with bistable point-light (PL) animations whose perceived direction of heading fluctuates over time. After exposure to a PL walker with a particular stereoscopically defined heading direction, observers experienced a consistent aftereffect: a bistable PL walker, which could be perceived in the adapted orientation or reversed in depth, was perceived predominantly reversed in depth. A phase-scrambled adaptor produced no aftereffect, yet when adapting and test walkers differed in size or appeared on opposite sides of fixation, aftereffects did occur. Thus, this heading direction aftereffect cannot be explained by local, disparity-specific motion adaptation, and the properties of scale and position invariance imply higher-level origins of neural adaptation. Nor is disparity essential for producing adaptation: when suspended on top of a stereoscopically defined, rotating globe, a context-disambiguated “globetrotter” was sufficient to bias the bistable walker’s direction, as were full-body adaptors. In sum, these results imply that the neural signals supporting biomotion perception integrate information on the form, motion, and three-dimensional depth orientation of the moving human figure. Models of biomotion perception should incorporate mechanisms to disambiguate depth ambiguities in two-dimensional body representations. PMID:20089892
Verstraten, Frans A J; Niehorster, Diederick C; van de Grind, Wim A; Wade, Nicholas J
2015-10-01
In his original contribution, Exner's principal concern was a comparison between the properties of different aftereffects, and particularly to determine whether aftereffects of motion were similar to those of color and whether they could be encompassed within a unified physiological framework. Despite the fact that he was unable to answer his main question, there are some excellent-so far unknown-contributions in Exner's paper. For example, he describes observations that can be related to binocular interaction, not only in motion aftereffects but also in rivalry. To the best of our knowledge, Exner provides the first description of binocular rivalry induced by differently moving patterns in each eye, for motion as well as for their aftereffects. Moreover, apart from several known, but beautifully addressed, phenomena he makes a clear distinction between motion in depth based on stimulus properties and motion in depth based on the interpretation of motion. That is, the experience of movement, as distinct from the perception of movement. The experience, unlike the perception, did not result in a motion aftereffect in depth.
Murd, Carolina; Bachmann, Talis
2011-05-25
In searching for the target afterimage patch among spatially separate color-afterimage alternatives, the target fades from awareness before its competitors (Bachmann, T., & Murd, C. (2010). Covert spatial attention in search for the location of a color-afterimage patch speeds up its decay from awareness: Introducing a method useful for the study of neural correlates of visual awareness. Vision Research, 50, 1048-1053). In an analogous study presented here, we show that a similar effect is obtained when a target spatial location, specified by the direction of the motion aftereffect within it, is searched for by covert top-down attention. The adverse effect of selective attention on the duration of awareness of sensory qualia, known earlier to be present for color and periodic spatial contrast, is thus extended to sensory channels carrying motion information.
Effects of feature-based attention on the motion aftereffect at remote locations.
Boynton, Geoffrey M; Ciaramitaro, Vivian M; Arman, A Cyrus
2006-09-01
Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus while attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction. Also, when the unattended location either contained uncorrelated motion or had no stimulus at all, an MAE was induced in the direction opposite to the attended direction of motion. The strength of the MAE was similar regardless of whether subjects attended to the speed or luminance of the attended dots. These results provide further support for a global feature-based mechanism of attention and show that the effect spreads across all features of an attended object and to all locations of visual space.
Conflict between aftereffects of retinal sweep and looming motion.
Bridgeman, B; Nardello, C
1994-01-01
Observers looked monocularly into a tunnel, with gratings on the left and right sides drifting toward the head. An exposure period was followed by a test with fixed gratings. With fixation points, left and right retinal fields could be stimulated selectively. When exposure and test were on the same retinal fields, but fixation was on opposite sides of the tunnel during exposure and test periods, aftereffects of retinal sweep and of perceived looming were in opposite directions. The two effects tended to cancel, yielding no perceived aftereffect. When they did occur, aftereffects in the retinal and the looming directions were equally likely. Cancellation was significantly more likely in the experimental conditions than in the control, when fixation always remained on the same side. When areas of retinal stimulation in the exposure and test periods did not overlap, cancellation was less frequent and aftereffects of looming were more frequent. Results were not significantly different for left and right visual fields, indicating that cortical vs. subcortical OKN pathways do not influence the illusion. Vection resulted for 16 of 20 observers under one or another of our conditions.
Enduring stereoscopic motion aftereffects induced by prolonged adaptation.
Bowd, C; Rose, D; Phinney, R E; Patterson, R
1996-11-01
This study investigated the effects of prolonged adaptation on the recovery of the stereoscopic motion aftereffect (adaptation induced by moving binocular disparity information). The adapting and test stimuli were stereoscopic grating patterns created from disparity, embedded in dynamic random-dot stereograms. Motion aftereffects induced by luminance stimuli were included in the study for comparison. Adaptation duration was either 1, 2, 4, 8, 16, 32 or 64 min and the duration of the ensuing aftereffect was the variable of interest. The results showed that aftereffect duration was proportional to the square root of adaptation duration for both stereoscopic and luminance stimuli; on log-log axes, the relation between aftereffect duration and adaptation duration was a power law with the slope near 0.5 in both cases. For both kinds of stimuli, there was no sign of adaptation saturation even at the longest adaptation duration.
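The square-root relation reported above can be written as T_MAE = k·√T_adapt, which is a power law with exponent 0.5 on log-log axes. A minimal numerical sketch of that fit follows; the proportionality constant `k` is hypothetical, since the abstract reports only the exponent:

```python
import numpy as np

# Adaptation durations used in the study (minutes)
t_adapt = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)

# Hypothetical constant (aftereffect seconds per sqrt-minute of adaptation);
# the paper reports only the ~0.5 exponent, not this scale factor.
k = 10.0
mae_duration = k * np.sqrt(t_adapt)

# On log-log axes a power law is a straight line whose slope is the exponent.
slope, intercept = np.polyfit(np.log(t_adapt), np.log(mae_duration), 1)
print(round(slope, 2))  # → 0.5, the square-root relation
```

The fitted slope recovers the exponent regardless of the choice of `k`, which only shifts the intercept.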
Durgin, Frank H; Fox, Laura F; Hoon Kim, Dong
2003-11-01
We investigated the phenomenon of limb-specific locomotor adaptation in order to adjudicate between sensory-cue-conflict theory and motor-adaptation theory. The results were consistent with cue-conflict theory in demonstrating that two different leg-specific hopping aftereffects are modulated by the presence of conflicting estimates of self-motion from visual and nonvisual sources. Experiment 1 shows that leg-specific increases in forward drift during attempts to hop in place on one leg while blindfolded vary according to the relationship between visual information and motor activity during an adaptation to outdoor forward hopping. Experiment 2 shows that leg-specific changes in performance on a blindfolded hopping-to-target task are similarly modulated by the presence of cue conflict during adaptation to hopping on a treadmill. Experiment 3 shows that leg-specific aftereffects from hopping additionally produce inadvertent turning during running in place while blindfolded. The results of these experiments suggest that these leg-specific locomotor aftereffects are produced by sensory-cue conflict rather than simple motor adaptation.
Evidence against the temporal subsampling account of illusory motion reversal
Kline, Keith A.; Eagleman, David M.
2010-01-01
An illusion of reversed motion may occur sporadically while viewing continuous smooth motion. This has been suggested as evidence of discrete temporal sampling by the visual system, in analogy to the sampling that generates the wagon-wheel effect on film. In an alternative theory, the illusion is not the result of discrete sampling but instead of perceptual rivalry between appropriately activated and spuriously activated motion detectors. Results of the current study demonstrate that illusory reversals of two spatially overlapping and orthogonal motions often occur separately, providing evidence against the possibility that illusory motion reversal (IMR) is caused by temporal sampling within a visual region. Further, we find that IMR occurs with non-uniform and non-periodic stimuli, an observation that is not accounted for by the temporal sampling hypothesis. We propose that a motion aftereffect is superimposed on the moving stimulus, sporadically allowing motion detectors for the reverse direction to dominate perception. PMID:18484852
Residual effects of ecstasy (3,4-methylenedioxymethamphetamine) on low level visual processes.
Murray, Elizabeth; Bruno, Raimondo; Brown, John
2012-03-01
'Ecstasy' (3,4-methylenedioxymethamphetamine) induces impaired functioning in the serotonergic system, including the occipital lobe. This study employed the 'tilt aftereffect' paradigm to operationalise the function of orientation-selective neurons among ecstasy consumers and controls, as a means of investigating the role of reduced serotonin in visual orientation processing. The magnitude of the tilt aftereffect reflects the extent of lateral inhibition between orientation-selective neurons and is elicited by both 'real' contours, processed in visual cortex area V1, and illusory contours, processed in V2. The magnitude of the tilt aftereffect to both contour types was examined among 19 ecstasy users (6 ecstasy only; 13 ecstasy-plus-cannabis users) and 23 matched controls (9 cannabis-only users; 14 drug-naive). Ecstasy users had a significantly greater tilt magnitude than non-users for real contours (Hedge's g = 0.63) but not for illusory contours (g = 0.20). These findings support literature suggesting that residual effects of ecstasy (and reduced serotonin) impair lateral inhibition between orientation-selective neurons in V1, but suggest that ecstasy may not substantially affect this process in V2. Multiple studies have now demonstrated ecstasy-related deficits on basic visual functions, including orientation and motion processing. Such low-level effects may contribute to the impact of ecstasy use on neuropsychological tests of visuospatial function.
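The effect sizes above (g = 0.63 and g = 0.20) are Hedges' g: the pooled-SD standardized mean difference multiplied by a small-sample bias correction. A sketch of the computation follows; the tilt-magnitude values are invented for illustration, not the study's raw data:

```python
import math

def hedges_g(group1, group2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (ddof = 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled  # Cohen's d
    # Small-sample correction factor J ≈ 1 - 3 / (4(n1 + n2) - 9)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Illustrative tilt magnitudes in degrees (hypothetical, not from the paper)
ecstasy_users = [2.4, 2.9, 3.1, 2.7, 3.3, 2.8]
controls = [2.0, 2.3, 2.6, 2.1, 2.5, 2.2]
print(round(hedges_g(ecstasy_users, controls), 2))
```

With the small samples typical of such studies, the correction factor J shrinks Cohen's d toward zero, which is why g rather than d is usually reported.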
Are face representations depth cue invariant?
Dehmoobadsharifabadi, Armita; Farivar, Reza
2016-06-01
The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations: representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffects in both sets of experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.
New rules for visual selection: Isolating procedural attention.
Ramamurthy, Mahalakshmi; Blaser, Erik
2017-02-01
High performance in well-practiced, everyday tasks (driving, sports, gaming) suggests a kind of procedural attention that can allocate processing resources to behaviorally relevant information in an unsupervised manner. Here we show that training can lead to a new, automatic attentional selection rule that operates in the absence of bottom-up, salience-driven triggers and willful top-down selection. Taking advantage of the fact that attention modulates motion aftereffects, observers were presented with a bivectorial display with overlapping, iso-salient red and green dot fields moving to the right and left, respectively, while distracted by a demanding auditory two-back memory task. Before training, since the motion vectors canceled each other out, no net motion aftereffect (MAE) was found. However, after 3 days (0.5 hr/day) of training, during which observers practiced selectively attending to the red, rightward field, a significant net MAE was observed, even when top-down selection was again distracted. Further experiments showed that these results were not due to perceptual learning, and that the new rule targeted the motion, not the color, of the target dot field, and global, not local, motion signals; thus, the new rule was: "select the rightward field." This study builds on recent work on selection history-driven and reward-driven biases, but uses a novel paradigm in which the allocation of visual processing resources is measured passively, offline, and when the observer's ability to execute top-down selection is defeated.
Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.
Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno
2016-11-01
Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.
Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury
2014-05-01
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to one of the faces used to create morphs between two expressions substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro
2011-01-01
During prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of these terminal errors triggers error-correction processes that eventually result in sensori-motor compensation opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual, and visual-proprioceptive shift) were recorded before and after first-person observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649
An investigation of the spatial selectivity of the duration after-effect.
Maarseveen, Jim; Hogendoorn, Hinze; Verstraten, Frans A J; Paffen, Chris L E
2017-01-01
Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOT National Transportation Integrated Search
1968-05-01
The study examined some effects of stimulus size and distance on the persistence of one type of illusory motion, viz., the spiral aftereffect (SAE). Duration of SAE was investigated with stimuli of 2, 4, 8, 12, and 16 inches in diameter. The distance...
Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures
Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314
How visual short-term memory maintenance modulates subsequent visual aftereffects.
Saad, Elyana; Silvanto, Juha
2013-05-01
Prolonged viewing of a visual stimulus can result in sensory adaptation, giving rise to perceptual phenomena such as the tilt aftereffect (TAE). However, it is not known if short-term memory maintenance induces such effects. We examined how visual short-term memory (VSTM) maintenance modulates the strength of the TAE induced by subsequent visual adaptation. We reasoned that if VSTM maintenance induces aftereffects on subsequent encoding of visual information, then it should either enhance or reduce the TAE induced by a subsequent visual adapter, depending on the congruency of the memory cue and the adapter. Our results were consistent with this hypothesis and thus indicate that the effects of VSTM maintenance can outlast the maintenance period.
Visuomotor adaptation to a visual rotation is gravity dependent.
Toma, Simone; Sciutti, Alessandra; Papaxanthis, Charalambos; Pozzo, Thierry
2015-03-15
Humans perform vertical and horizontal arm motions with different temporal patterns. The specific velocity profiles are chosen by the central nervous system, which integrates the gravitational force field to minimize energy expenditure. But what happens when a visuomotor rotation is applied, so that a motion performed in the horizontal plane is perceived as vertical? We investigated the dynamics of adaptation of the spatial and temporal properties of a pointing motion during prolonged exposure to a 90° visuomotor rotation, in which a horizontal movement was associated with vertical visual feedback. We found that participants immediately adapted the spatial parameters of motion to the conflicting visual scene in order to keep their arm trajectory straight. In contrast, the initially symmetric velocity profiles specific to horizontal motion were progressively modified during conflict exposure, becoming more asymmetric and similar to those appropriate for vertical motion. Importantly, this visual effect, which increased with repetition, was not followed by a consistent aftereffect when the conflicting visual feedback was absent (catch and washout trials). In a control experiment we demonstrated that an intrinsic representation of the temporal structure of perceived vertical motions could provide the error signal enabling this progressive adaptation of motion timing. These findings suggest that gravity strongly constrains motor learning and the reweighting between visual and proprioceptive sensory inputs, leading to the selection of a motor plan that is suboptimal in terms of energy expenditure. Copyright © 2015 the American Physiological Society.
The Use of Aftereffects in the Study of Relationships among Emotion Categories
ERIC Educational Resources Information Center
Rutherford, M. D.; Chattha, Harnimrat Monica; Krysko, Kristen M.
2008-01-01
The perception of visual aftereffects has been long recognized, and these aftereffects reveal a relationship between perceptual categories. Thus, emotional expression aftereffects can be used to map the categorical relationships among emotion percepts. One might expect a symmetric relationship among categories, but an evolutionary, functional…
Shioiri, Satoshi; Matsumiya, Kazumichi
2009-05-29
We investigated spatiotemporal characteristics of motion mechanisms using a new type of motion aftereffect (MAE) we found. Our stimulus comprised two superimposed sinusoidal gratings with different spatial frequencies. After exposure to the moving stimulus, observers perceived the MAE in the static test in the direction opposite to that of the high spatial frequency grating even when low spatial frequency motion was perceived during adaptation. In contrast, in the flicker test, the MAE was perceived in the direction opposite to that of the low spatial frequency grating. These MAEs indicate that two different motion systems contribute to motion perception and can be isolated by using different test stimuli. Using a psychophysical technique based on the MAE, we investigated the differences between the two motion mechanisms. The results showed that the static MAE is the aftereffect of the motion system with a high spatial and low temporal frequency tuning (slow motion detector) and the flicker MAE is the aftereffect of the motion system with a low spatial and high temporal frequency tuning (fast motion detector). We also revealed that the two motion detectors differ in orientation tuning, temporal frequency tuning, and sensitivity to relative motion.
The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
2009-11-18
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. Additional supporting information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
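The review's point that direction must be computed by comparing the responses of sensors tuned to different directions can be illustrated with a textbook opponent Reichardt correlator, a standard motion-sensor model. This is a generic sketch, not the specific model discussed in the article; the stimulus parameters and correlator offsets are illustrative.

```python
import numpy as np

def drifting_grating(direction, n_x=64, n_t=200, sf=4, tf=0.05):
    """1D drifting sinusoidal grating as a (time x space) luminance array.
    direction = +1 for rightward drift, -1 for leftward."""
    x = np.arange(n_x) / n_x  # spatial positions (fraction of display width)
    t = np.arange(n_t)        # time steps
    return np.sin(2 * np.pi * (sf * x[None, :] - direction * tf * t[:, None]))

def reichardt_output(stim, dx=2, dt=3):
    """Opponent Reichardt correlator: correlate each point's delayed signal
    with its neighbor's current signal, in both directions, and subtract.
    A positive mean output signals rightward motion."""
    a = stim[:-dt, :-dx]  # left sensor, delayed
    b = stim[dt:, dx:]    # right sensor, current
    c = stim[:-dt, dx:]   # right sensor, delayed
    d = stim[dt:, :-dx]   # left sensor, current
    rightward = a * b     # correlation consistent with left-to-right motion
    leftward = c * d      # correlation consistent with right-to-left motion
    return np.mean(rightward - leftward)

right = reichardt_output(drifting_grating(+1))  # positive for rightward drift
left = reichardt_output(drifting_grating(-1))   # negative for leftward drift
```

The opponent subtraction is what makes the output signed: neither half-detector alone distinguishes direction unambiguously, which is the univariance problem the review describes.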
Prism adaptation in virtual and natural contexts: Evidence for a flexible adaptive process.
Veilleux, Louis-Nicolas; Proteau, Luc
2015-01-01
Prism exposure when aiming at a visual target in a virtual condition (e.g., when the hand is represented by a video representation) produces no or only small adaptations (after-effects), whereas prism exposure in a natural condition produces large after-effects. Some researchers suggested that this difference may arise from distinct adaptive processes, but other studies suggested a unique process. The present study reconciled these conflicting interpretations. Forty participants were divided into two groups: One group used visual feedback of their hand (natural context), and the other group used computer-generated representational feedback (virtual context). Visual feedback during adaptation was concurrent or terminal. All participants underwent laterally displacing prism perturbation. The results showed that the after-effects were twice as large in the "natural context" than in the "virtual context". No significant differences were observed between the concurrent and terminal feedback conditions. The after-effects generalized to untested targets and workspace. These results suggest that prism adaptation in virtual and natural contexts involves the same process. The smaller after-effects in the virtual context suggest that the depth of adaptation is a function of the degree of convergence between the proprioceptive and visual information that arises from the hand.
Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne
2017-05-01
Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.
Sharpening vision by adapting to flicker.
Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A
2016-11-01
Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously, spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision, allowing people to localize inputs with greater precision and to read finer-scaled text, and selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics but can have poor spatial resolution), and magnocellular responses contribute coarse-spatial-scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are reduced.
Motion sickness and proprioceptive aftereffects following virtual environment exposure
NASA Technical Reports Server (NTRS)
Stanney, K. M.; Kennedy, R. S.; Drexler, J. M.; Harm, D. L.
1999-01-01
To study the potential aftereffects of virtual environments (VEs), tests of visually guided behavior and felt limb position (pointing with eyes open and closed), along with self-reports of motion sickness-like discomfort, were administered before and after 30 min of exposure in 34 subjects. When post-exposure discomfort was compared to the pre-exposure baseline, participants reported more sickness afterward (p < 0.03). The change in felt limb position resulted in subjects pointing higher (p < 0.038) and slightly to the left, although the latter difference was not statistically significant (p = 0.08). Findings from a second study using a different VE system essentially replicated the results of the first, with higher sickness afterward (p < 0.001) and post-exposure pointing errors that were also upward (p < 0.001) and to the left (p < 0.001). While alternative explanations (e.g., learning, fatigue, boredom, habituation) of these outcomes cannot be ruled out, the consistency of the post-exposure changes in felt limb position across the two VEs implies that these recalibrations may linger once interaction with the VE has concluded, leaving users potentially physiologically maladapted for the real world when they return. This suggests there may be safety concerns following VE exposure until pre-exposure functioning has been regained. The results of this study emphasize the need for developing and using objective measures of post-exposure aftereffects in order to systematically determine under what conditions these effects may occur.
Moon illusion and spiral aftereffect: illusions due to the loom-zoom system?
Hershenson, M
1982-12-01
The moon illusion and the spiral aftereffect are illusions in which apparent size and apparent distance vary inversely. Because this relationship is exactly opposite to that predicted by the static size-distance invariance hypothesis, the illusions have been called "paradoxical." The illusions may be understood as products of a loom-zoom system, a hypothetical visual subsystem that, in its normal operation, acts according to its structural constraint, the constancy axiom, to produce perceptions that satisfy the constraints of stimulation, the kinetic size-distance invariance hypothesis. When stimulated by its characteristic stimulus of symmetrical expansion or contraction, the loom-zoom system produces the perception of a rigid object moving in depth. If this system is stimulated by a rotating spiral, a negative motion aftereffect is produced when rotation ceases. If fixation is then shifted to a fixed-size disc, the aftereffect process alters perceived distance and the loom-zoom system alters perceived size such that the disc appears to expand and approach or to contract and recede, depending on the direction of rotation of the spiral. If the loom-zoom system is stimulated by a moon-terrain configuration, the equidistance tendency produces a foreshortened perceived distance for the moon as an inverse function of elevation and acts in conjunction with the loom-zoom system to produce the increased perceived size of the moon.
Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2014-07-01
Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, a GABA-mediated disruption to V5/MT processing may be reducing spatial suppression and thereby improving global motion perception in ecstasy users.
Cross-Modal Face Identity Aftereffects and Their Relation to Priming
ERIC Educational Resources Information Center
Hills, Peter J.; Elward, Rachael L.; Lewis, Michael B.
2010-01-01
We tested the magnitude of the face identity aftereffect following adaptation to different modes of adaptors in four experiments. The perceptual midpoint between two morphed famous faces was measured pre- and post-adaptation. Significant aftereffects were observed for visual (faces) and nonvisual adaptors (voices and names) but not nonspecific…
Dynamics of contextual modulation of perceived shape in human vision
Gheorghiu, Elena; Kingdom, Frederick A. A.
2017-01-01
In biological vision, contextual modulation refers to the influence of a surround pattern on either the perception of, or the neural responses to, a target pattern. One studied form of contextual modulation deals with the effect of a surround texture on the perceived shape of a contour, in the context of the phenomenon known as the shape aftereffect. In the shape aftereffect, prolonged viewing, or adaptation to a particular contour’s shape causes a shift in the perceived shape of a subsequently viewed contour. Shape aftereffects are suppressed when the adaptor contour is surrounded by a texture of similarly-shaped contours, a surprising result given that the surround contours are all potential adaptors. Here we determine the motion and temporal properties of this form of contextual modulation. We varied the relative motion directions, speeds and temporal phases between the central adaptor contour and the surround texture and measured for each manipulation the degree to which the shape aftereffect was suppressed. Results indicate that contextual modulation of shape processing is selective to motion direction, temporal frequency and temporal phase. These selectivities are consistent with one aim of vision being to segregate contours that define objects from those that form textured surfaces. PMID:28230085
Visually induced plasticity of auditory spatial perception in macaques.
Woods, Timothy M; Recanzone, Gregg H
2004-09-07
When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.
Qian, Ning; Dayan, Peter
2013-01-01
A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli. PMID:23732217
Visual aftereffects and sensory nonlinearities from a single statistical framework
Laparra, Valero; Malo, Jesús
2015-01-01
When adapted to a particular scenery our senses may fool us: colors are misinterpreted, certain spatial patterns seem to fade out, and static objects appear to move in reverse. A mere empirical description of the mechanisms tuned to color, texture, and motion may tell us where these visual illusions come from. However, such empirical models of gain control do not explain why these mechanisms work in this apparently dysfunctional manner. Current normative explanations of aftereffects based on scene statistics derive gain changes by (1) invoking decorrelation and linear manifold matching/equalization, or (2) using nonlinear divisive normalization obtained from parametric scene models. These principled approaches have different drawbacks: the first is not compatible with the known saturation nonlinearities in the sensors and it cannot fully accomplish information maximization due to its linear nature. In the second, gain change is almost determined a priori by the assumed parametric image model linked to divisive normalization. In this study we show that both the response changes that lead to aftereffects and the nonlinear behavior can be simultaneously derived from a single statistical framework: the Sequential Principal Curves Analysis (SPCA). As opposed to mechanistic models, SPCA is not intended to describe how physiological sensors work, but it is focused on explaining why they behave as they do. Nonparametric SPCA has two key advantages as a normative model of adaptation: (i) it is better than linear techniques as it is a flexible equalization that can be tuned for more sensible criteria other than plain decorrelation (either full information maximization or error minimization); and (ii) it makes no a priori functional assumption regarding the nonlinearity, so the saturations emerge directly from the scene data and the goal (and not from the assumed function). 
It turns out that the optimal responses derived from these more sensible criteria and SPCA are consistent with dysfunctional behaviors such as aftereffects. PMID:26528165
Image statistics and the perception of surface gloss and lightness.
Kim, Juno; Anderson, Barton L
2010-07-01
Despite previous data demonstrating the critical importance of 3D surface geometry in the perception of gloss and lightness, I. Motoyoshi, S. Nishida, L. Sharan, and E. H. Adelson (2007) recently proposed that a simple image statistic--histogram or sub-band skew--is computed by the visual system to infer the gloss and albedo of surfaces. One key source of evidence used to support this claim was an experiment in which adaptation to skewed image statistics resulted in opponent aftereffects in observers' judgments of gloss and lightness. We report a series of adaptation experiments that were designed to assess the cause of these aftereffects. We replicated their original aftereffects in gloss but found no consistent aftereffect in lightness. We report that adaptation to zero-skew adaptors produced aftereffects similar to those of positively skewed adaptors, and that negatively skewed adaptors induced no reliable aftereffects. We further found that the adaptation effect observed with positively skewed adaptors is not robust to changes in mean luminance that diminish the intensity of the luminance extrema. Finally, we show that adaptation to positive skew reduces (rather than increases) the apparent lightness of light pigmentation on non-uniform albedo surfaces. These results challenge the view that the adaptation results reported by Motoyoshi et al. (2007) provide evidence that skew is explicitly computed by the visual system.
Imagining sex and adapting to it: different aftereffects after perceiving versus imagining faces.
D'Ascenzo, Stefania; Tommasi, Luca; Laeng, Bruno
2014-03-01
A prolonged exposure (i.e., perceptual adaptation) to a male or a female face can produce changes (i.e., aftereffects) in the subsequent gender attribution of a neutral or average face, so that it appears respectively more female or more male. Studies using imagery adaptation and its aftereffects have yielded conflicting results. In the present study we used an adaptation paradigm with both imagined and perceived faces as adaptors, and assessed the aftereffects in judged masculinity/femininity when viewing an androgynous test face. We monitored eye movements and pupillary responses as a way to confirm whether participants did actively engage in visual imagery. The results indicated that both perceptual and imagery adaptation produce aftereffects, but that they run in opposite directions: a contrast effect with perception (e.g., after visual exposure to a female face, the androgynous face appears as more male) and an assimilation effect with imagery (e.g., after imaginative exposure to a female face, the androgynous face appears as more female). The pupillary responses revealed dilations consistent with increased cognitive effort during the imagery phase, suggesting that the assimilation aftereffect occurred in the presence of an active and effortful mental imagery process, as also witnessed by the pattern of eye movements recorded during the imagery adaptation phase.
Pavan, Andrea; Marotti, Rosilari Bellacosa; Mather, George
2013-05-31
Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion-integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted. PMID:23729767
Motion sickness incidence during a round-the-world yacht race.
Turner, M; Griffin, M J
1995-09-01
Motion sickness experiences were obtained from participants in a 9-month, round-the-world yacht race. Race participants completed questionnaires on their motion sickness experience 1 week prior to the start of the race, during the race, and following the race. Yacht headings, sea states, and wind directions were recorded throughout the race. Illness and the occurrence of vomiting were related to the duration at sea and yacht encounter directions relative to the prevailing wind. Individual crewmember characteristics, the use of anti-motion sickness drugs, activity while at sea, and after-effects of yacht motion were also examined with respect to sickness occurrence. Sickness was greatest among females and younger crewmembers, and among crewmembers who used anti-motion sickness drugs. Sickness varied as a function of drug type and activity while at sea. Crewmembers who reported after-effects of yacht motion also reported greater sickness while at sea. The primary determinants of motion sickness were the duration of time spent at sea and yacht encounter direction to the prevailing wind.
Binding of motion and colour is early and automatic.
Blaser, Erik; Papathomas, Thomas; Vidnyánszky, Zoltán
2005-04-01
At what stages of the human visual hierarchy different features are bound together, and whether this binding requires attention, is still highly debated. We used a colour-contingent motion after-effect (CCMAE) to study the binding of colour and motion signals. The logic of our approach was as follows: if CCMAEs can be evoked by targeted adaptation of early motion processing stages, without allowing for feedback from higher motion integration stages, then this would support our hypothesis that colour and motion are bound automatically on the basis of spatiotemporally local information. Our results show for the first time that CCMAEs can be evoked by adaptation to a locally paired opposite-motion dot display, a stimulus that, importantly, is known to trigger direction-specific responses in the primary visual cortex yet results in strong inhibition of the directional responses in area MT of macaques as well as in area MT+ in humans and, indeed, is perceived only as motionless flicker. The magnitude of the CCMAE in the locally paired condition was not significantly different from control conditions where the different directions were spatiotemporally separated (i.e. not locally paired) and therefore perceived as two moving fields. These findings provide evidence that adaptation at an early, local motion stage, and only adaptation at this stage, underlies this CCMAE, which in turn implies that spatiotemporally coincident colour and motion signals are bound automatically, most probably as early as cortical area V1, even when the association between colour and motion is perceptually inaccessible.
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Sleep's influence on a reflexive form of memory that does not require voluntary attention.
Sheth, Bhavin R; Serranzana, Andrew; Anjum, Syed F; Khan, Murtuza
2012-05-01
Studies to date have examined the influence of sleep on forms of memory that require voluntary attention. The authors examine the influence of sleep on a form of memory that is acquired by passive viewing. Design: induction of the McCollough effect, and measurement of perceptual color bias before and after induction, and before and after intervening sleep, wake, or visual deprivation. Setting: sound-attenuated sleep research room. Participants: 13 healthy volunteers (mean age = 23 years; age range = 18-31 years) with normal or corrected-to-normal vision. Interventions: N/A. Encoding: sleep preceded adaptation. On separate nights, each participant slept for an average of 0 (wake), 1, 2, 4, or 7 hr (complete sleep). Upon awakening, the participant's baseline perceptual color bias was measured. Then, he or she viewed an adapter consisting of alternating red/horizontal and green/vertical gratings for 5 min. Color bias was remeasured. The strength of the aftereffect is the postadaptation color bias relative to baseline. A strong orientation-contingent color aftereffect was observed in all participants, but total sleep duration (TSD) prior to the adaptation did not modulate aftereffect strength. Further, prior sleep provided no benefit over prior wake. Retention: sleep followed adaptation. The procedure was similar except that adaptation preceded sleep. Postadaptation sleep, irrespective of its duration (1, 3, 5, or 7 hr), arrested aftereffect decay. By contrast, aftereffect decay was arrested during subsequent wake only if the adapted eye was visually deprived. Sleep as well as passive sensory deprivation enables the retention of a color aftereffect. Sleep shelters this reflexive form of memory in a manner akin to preventing sensory interference.
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.
Wright, W Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.
Equivalent background speed in recovery from motion adaptation.
Simpson, W A; Newman, A; Aasland, W
1997-01-01
We measured, in the same observers, (1) the detectability, d', of a small rotational jump following adaptation to rotational motion and (2) the detectability of the same jump when superimposed on one of several background rotation speeds. Following 90 s of motion adaptation the detectability of the jump was impaired, and sensitivity slowly recovered over the course of 60 s. The detectability of the jump was also impaired by the background speed in a way consistent with a quadratic form of Weber's law. We propose that motion adaptation impairs the detectability of the small jump because it is as if an equivalent background speed has been superimposed on the display. We measured the equivalent background by finding the real background speed that produced the same d' at each instant in the recovery from motion adaptation. The equivalent background started at approximately one to two thirds the speed of the adapting motion, declined rapidly, rose to a small peak at 30 s, then disappeared by 60 s. Since the equivalent background speed corresponds to the speed of the motion aftereffect, we have measured the time course of the motion aftereffect with objective psychophysics.
Reinhardt-Rutland, A H
2003-07-01
Induced motion is the illusory motion of a static stimulus in the opposite direction to a moving stimulus. Two types of induced motion have been distinguished: (a) when the moving stimulus is distant from the static stimulus and undergoes overall displacement, and (b) when the moving stimulus is a pattern viewed within fixed boundaries that abut the static stimulus. Explanations of the 1st type of induced motion refer to mediating phenomena, such as vection, whereas the 2nd type is attributed to local processing by motion-sensitive neurons. The present research was directed to a display that elicited induced rotational motion with the characteristics of both types of induced motion: the moving stimulus lay within fixed boundaries, but the inducing and induced stimuli were distant from each other. The author investigated the properties that distinguished the two types of induced motion. In 3 experiments, induced motion persisted indefinitely, interocular transfer of the aftereffect of induced motion was limited to about 20%, and the time-course of the aftereffect of induced motion could not be attributed to vection. Those results were consistent with fixed-boundary induced motion. However, they could not be explained by local processing. Instead, the results might reflect the detection of object motion within a complex flow-field that resulted from the observer's motion.
Phantom motion after effects--evidence of detectors for the analysis of optic flow.
Snowden, R J; Milne, A B
1997-10-01
Electrophysiological recording from the extrastriate cortex of non-human primates has revealed neurons that have large receptive fields and are sensitive to various components of object or self movement, such as translations, rotations and expansion/contractions. If these mechanisms exist in human vision, they might be susceptible to adaptation that generates motion aftereffects (MAEs). Indeed, it might be possible to adapt the mechanism in one part of the visual field and reveal what we term a 'phantom MAE' in another part. The existence of phantom MAEs was probed by adapting to a pattern that contained motion in only two non-adjacent 'quarter' segments and then testing using patterns that had elements in only the other two segments. We also tested for the more conventional 'concrete' MAE by testing in the same two segments that had adapted. The strength of each MAE was quantified by measuring the percentage of dots that had to be moved in the opposite direction to the MAE in order to nullify it. Four experiments tested rotational motion, expansion/contraction motion, translational motion and a 'rotation' that consisted simply of the two segments that contained only translational motions of opposing direction. Compared to a baseline measurement where no adaptation took place, all subjects in all experiments exhibited both concrete and phantom MAEs, with the size of the latter approximately half that of the former. Adaptation to two segments that contained upward and downward motion induced the perception of leftward and rightward motion in another part of the visual field. This strongly suggests there are mechanisms in human vision that are sensitive to complex motions such as rotations.
Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.
Berger, Christopher C; Ehrsson, H Henrik
2018-04-01
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect-a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli-is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
Type of featural attention differentially modulates hMT+ responses to illusory motion aftereffects.
Castelo-Branco, Miguel; Kozak, Lajos R; Formisano, Elia; Teixeira, João; Xavier, João; Goebel, Rainer
2009-11-01
Activity in the human motion complex (hMT+/V5) is related to the perception of motion, be it real surface motion or an illusion of motion such as apparent motion (AM) or motion aftereffect (MAE). It is a long-lasting debate whether illusory motion-related activations in hMT+ represent the motion itself or attention to it. We have asked whether hMT+ responses to MAEs are present when shifts in arousal are suppressed and attention is focused on concurrent motion versus nonmotion features. Significant enhancement of hMT+ activity was observed during MAEs when attention was focused either on concurrent spatial angle or color features. This observation was confirmed by direct comparison of adapting (MAE inducing) versus nonadapting conditions. In contrast, this effect was diminished when subjects had to report on concomitant speed changes of superimposed AM. The same finding was observed for concomitant orthogonal real motion (RM), suggesting that selective attention to concurrent illusory or real motion was interfering with the saliency of MAE signals in hMT+. We conclude that MAE-related changes in the global activity of hMT+ are present provided selective attention is not focused on an interfering feature such as concurrent motion. Accordingly, there is a genuine MAE-related motion signal in hMT+ that is neither explained by shifts in arousal nor by selective attention.
Cross-Category Adaptation: Objects Produce Gender Adaptation in the Perception of Faces
Javadi, Amir Homayoun; Wee, Natalie
2012-01-01
Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated with females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes. PMID:23049942
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.
Modulation frequency as a cue for auditory speed perception.
Senna, Irene; Parise, Cesare V; Ernst, Marc O
2017-07-12
Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities.
Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Welch, Robert B.
1994-01-01
Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
Prism adaptation and neck muscle vibration in healthy individuals: are two methods better than one?
Guinet, M; Michel, C
2013-12-19
Studies involving therapeutic combinations reveal an important benefit in the rehabilitation of neglect patients when compared to single therapies. In light of these observations our present work examines, in healthy individuals, sensorimotor and cognitive after-effects of prism adaptation and neck muscle vibration applied individually or simultaneously. We explored sensorimotor after-effects on visuo-manual open-loop pointing, visual and proprioceptive straight-ahead estimations. We assessed cognitive after-effects on the line bisection task. Fifty-four healthy participants were divided into six groups designated according to the exposure procedure used with each: 'Prism' (P) group; 'Vibration with a sensation of body rotation' (Vb) group; 'Vibration with a movement illusion of the LED' (Vl) group; 'Association with a sensation of body rotation' (Ab) group; 'Association with a movement illusion of the LED' (Al) group; and 'Control' (C) group. The main findings showed that prism adaptation applied alone or combined with vibration produced significant adaptation in visuo-manual open-loop pointing, visual straight-ahead and proprioceptive straight-ahead. Vibration alone produced significant after-effects on proprioceptive straight-ahead estimation in the Vl group. Furthermore all groups (except the C group) showed a rightward neglect-like bias in line bisection following the training procedure. This is the first demonstration of cognitive after-effects following neck muscle vibration in healthy individuals. The simultaneous application of both methods did not produce significantly greater after-effects than prism adaptation alone in both sensorimotor and cognitive tasks. These results are discussed in terms of transfer of sensorimotor plasticity to spatial cognition in healthy individuals.
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
Visual and vestibular components of motion sickness.
Eyeson-Annan, M; Peterken, C; Brown, B; Atchison, D
1996-10-01
The relative importance of visual and vestibular information in the etiology of motion sickness (MS) is not well understood, but these factors can be manipulated by inducing Coriolis and pseudo-Coriolis effects in experimental subjects. We hypothesized that visual and vestibular information are equivalent in producing MS. The experiments reported here aim, in part, to examine the relative influence of Coriolis and pseudo-Coriolis effects in inducing MS. We induced MS symptoms by combinations of whole body rotation and tilt, and environment rotation and tilt, in 22 volunteer subjects. Subjects participated in all of the experiments with at least 2 d between each experiment to dissipate after-effects. We recorded MS signs and symptoms when only visual stimulation was applied, when only vestibular stimulation was applied, and when both visual and vestibular stimulation were applied under specific conditions of whole body and environmental tilt. Visual stimuli produced more symptoms of MS than vestibular stimuli when only visual or vestibular stimuli were used (ANOVA: F = 7.94, df = 1,21, p = 0.01), but there was no significant difference in MS production when combined visual and vestibular stimulation were used to produce the Coriolis effect or pseudo-Coriolis effect (ANOVA: F = 0.40, df = 1,21, p = 0.53). This was further confirmed by examination of the order in which the symptoms occurred and the lack of a correlation between previous experience and visually induced MS. Visual information is more important than vestibular input in causing MS when these stimuli are presented in isolation. In conditions where both visual and vestibular information are present, cross-coupling appears to occur between the pseudo-Coriolis effect and the Coriolis effect, as these two conditions are not significantly different in producing MS symptoms.
An aftereffect of adaptation to mean size
Corbett, Jennifer E.; Wurnitsch, Nicole; Schwartz, Alex; Whitney, David
2013-01-01
The visual system rapidly represents the mean size of sets of objects. Here, we investigated whether mean size is explicitly encoded by the visual system, along a single dimension like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is an adaptable, and therefore explicitly encoded, dimension of visual scenes. PMID:24348083
From elements to perception: local and global processing in visual neurons.
Spillmann, L
1999-01-01
Gestalt psychologists in the early part of the century challenged psychophysical notions that perceptual phenomena can be understood from a punctate (atomistic) analysis of the elements present in the stimulus. Their ideas slowed later attempts to explain vision in terms of single-cell recordings from individual neurons. A rapprochement between Gestalt phenomenology and neurophysiology seemed unlikely when the first ECVP was held in Marburg, Germany, in 1978. Since that time, response properties of neurons have been discovered that invite an interpretation of visual phenomena (including illusions) in terms of neuronal processing by long-range interactions, as first proposed by Mach and Hering in the last century. This article traces a personal journey into the early days of neurophysiological vision research to illustrate the progress that has taken place from the first attempts to correlate single-cell responses with visual perceptions. Whereas initially the receptive-field properties of individual classes of cells--e.g., contrast, wavelength, orientation, motion, disparity, and spatial-frequency detectors--were used to account for relatively simple visual phenomena, nowadays complex perceptions are interpreted in terms of long-range interactions, involving many neurons. This change in paradigm from local to global processing was made possible by recent findings, in the cortex, on horizontal interactions and backward propagation (feedback loops) in addition to classical feedforward processing. These mechanisms are exemplified by studies of the tilt effect and tilt aftereffect, direction-specific motion adaptation, illusory contours, filling-in and fading, figure--ground segregation by orientation and motion contrast, and pop-out in dynamic visual-noise patterns. Major questions for future research and a discussion of their epistemological implications conclude the article.
Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.
Durant, Szonya; Wall, Matthew B; Zanker, Johannes M
2011-09-09
Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
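The two-layer architecture described above can be caricatured in a few lines. The following is a hedged sketch, not the authors' implementation: the Gaussian tuning widths, the inter-layer weight `w`, and the centroid decoder are illustrative assumptions, chosen only to show how broad auditory tuning plus narrow visual feedback shifts the decoded sound position toward the visual stimulus.

```python
import numpy as np

# Two 1-D layers of position-tuned neurons. Visual neurons have sharper
# tuning (higher spatial reliability) than auditory ones; visual->auditory
# feedback amplifies residual auditory activity at the visual position.
# All parameter values below are illustrative assumptions.

positions = np.arange(0, 180)               # preferred positions, deg

def tuned_response(stim_pos, sigma):
    """Gaussian population response of one unimodal layer."""
    return np.exp(-(positions - stim_pos) ** 2 / (2 * sigma ** 2))

def decode(activity):
    """Population-centroid estimate of stimulus position."""
    return np.sum(positions * activity) / np.sum(activity)

vis_pos, aud_pos = 90.0, 70.0               # spatially discrepant inputs
vis = tuned_response(vis_pos, sigma=4.0)    # narrow visual tuning
aud = tuned_response(aud_pos, sigma=30.0)   # broad auditory tuning

w = 0.6                                     # visual->auditory feedback gain
aud_biased = aud + w * vis                  # cross-modal excitation

print(decode(aud))         # near 70: unimodal auditory estimate
print(decode(aud_biased))  # pulled toward 90: ventriloquism effect
```

Making `w` plastic under a Hebbian rule during repeated discrepant stimulation would be the natural way to extend this toy model toward the enduring aftereffect the authors describe.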
Face Aftereffects Indicate Dissociable, but Not Distinct, Coding of Male and Female Faces
ERIC Educational Resources Information Center
Jaquet, Emma; Rhodes, Gillian
2008-01-01
It has been claimed that exposure to distorted faces of one sex induces perceptual aftereffects for test faces that are of the same sex, but not for test faces of the other sex (A. C. Little, L. M. DeBruine, & B. C. Jones, 2005). This result suggests that male and female faces have separate neural coding. Given the high degree of visual similarity…
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds
Wright, W. Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed. PMID:24782724
Campana, Gianluca; Camilleri, Rebecca; Moret, Beatrice; Ghin, Filippo; Pavan, Andrea
2016-01-01
Transcranial random noise stimulation (tRNS) is a recent neuro-modulation technique whose effects at both behavioural and neural level are still debated. Here we employed the well-known phenomenon of motion after-effect (MAE) in order to investigate the effects of high- vs. low-frequency tRNS on motion adaptation and recovery. Participants were asked to estimate the MAE duration following prolonged adaptation (20 s) to a complex moving pattern, while being stimulated with either sham or tRNS across different blocks. Different groups were administered with either high- or low-frequency tRNS. Stimulation sites were either bilateral human MT complex (hMT+) or frontal areas. The results showed that, whereas no effects on MAE duration were induced by stimulating frontal areas, when applied to the bilateral hMT+, high-frequency tRNS caused a significant decrease in MAE duration whereas low-frequency tRNS caused a significant corresponding increase in MAE duration. These findings indicate that high- and low-frequency tRNS have opposite effects on the adaptation-dependent imbalance between neurons tuned to opposite motion directions, and thus on neuronal excitability. PMID:27934947
Ebner, Christian; Schroll, Henning; Winther, Gesche; Niedeggen, Michael; Hamker, Fred H
2015-09-01
How the brain decides which information to process 'consciously' has been debated for decades without a simple explanation at hand. While most experiments manipulate the perceptual energy of presented stimuli, the distractor-induced blindness task is a prototypical paradigm to investigate gating of information into consciousness with little or no visual manipulation. In this paradigm, subjects are asked to report intervals of coherent dot motion in a rapid serial visual presentation (RSVP) stream, whenever these are preceded by a particular color stimulus in a different RSVP stream. If distractors (i.e., intervals of coherent dot motion prior to the color stimulus) are shown, subjects' abilities to perceive and report intervals of target dot motion decrease, particularly with short delays between intervals of target color and target motion. We propose a biologically plausible neuro-computational model of how the brain controls access to consciousness to explain how distractor-induced blindness originates from information processing in the cortex and basal ganglia. The model suggests that conscious perception requires reverberation of activity in cortico-subcortical loops and that basal-ganglia pathways can either allow or inhibit this reverberation. In the distractor-induced blindness paradigm, inadequate distractor-induced response tendencies are suppressed by the inhibitory 'hyperdirect' pathway of the basal ganglia. If a target follows such a distractor closely, temporal aftereffects of distractor suppression prevent target identification. The model reproduces experimental data on how delays between target color and target motion affect the probability of target detection. Copyright © 2015 Elsevier Inc. All rights reserved.
Sleep's Influence on a Reflexive Form of Memory That Does Not Require Voluntary Attention
Sheth, Bhavin R.; Serranzana, Andrew; Anjum, Syed F.; Khan, Murtuza
2012-01-01
Study Objectives: Studies to date have examined the influence of sleep on forms of memory that require voluntary attention. The authors examine the influence of sleep on a form of memory that is acquired by passive viewing. Design: Induction of the McCollough effect, and measurement of perceptual color bias before and after induction, and before and after intervening sleep, wake, or visual deprivation. Setting: Sound-attenuated sleep research room. Participants: 13 healthy volunteers (mean age = 23 years; age range = 18–31 years) with normal or corrected-to-normal vision. Interventions: N/A. Measurements and Results: Encoding: sleep preceded adaptation. On separate nights, each participant slept for an average of 0 (wake), 1, 2, 4, or 7 hr (complete sleep). Upon awakening, the participant's baseline perceptual color bias was measured. Then, he or she viewed an adapter consisting of alternating red/horizontal and green/vertical gratings for 5 min. Color bias was remeasured. The strength of the aftereffect is the postadaptation color bias relative to baseline. A strong orientation contingent color aftereffect was observed in all participants, but total sleep duration (TSD) prior to the adaptation did not modulate aftereffect strength. Further, prior sleep provided no benefit over prior wake. Retention: sleep followed adaptation. The procedure was similar except that adaptation preceded sleep. Postadaptation sleep, irrespective of its duration (1, 3, 5, or 7 hr), arrested aftereffect decay. By contrast, aftereffect decay was arrested during subsequent wake only if the adapted eye was visually deprived. Conclusions: Sleep as well as passive sensory deprivation enables the retention of a color aftereffect. Sleep shelters this reflexive form of memory in a manner akin to preventing sensory interference. Citation: Sheth BR; Serranzana A; Anjum SF; Khan M. Sleep's influence on a reflexive form of memory that does not require voluntary attention. SLEEP 2012;35(5):657-666.
PMID:22547892
Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion
Zhang, Huihui; Chen, Lihan; Zhou, Xiaolin
2012-01-01
It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). By using a temporal adaptation task, we investigated whether adapting to different time intervals conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps) would affect the subsequent implicit perception of visual timing, i.e., inter-stimulus interval (ISI) between two frames in a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: “element motion” for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; “group motion” for longer ISIs, in which both discs appear to move in a manner of lateral displacement as a whole. In Experiment 1, participants adapted to either the typical “element motion” (ISI = 50 ms) or the typical “group motion” (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms through observing a series of two paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of two paired beeps (with pitch 1000 Hz). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold of the two types of motions, as determined by a pretest. Results showed that adapting to the short time interval in all the situations led to more reports of “group motion” in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for auditory adaptation. 
These findings, suggesting amodal representation for sub-second timing across modalities, are interpreted within the framework of the temporal pacemaker model. PMID:23133408
Adaptation to Skew Distortions of Natural Scenes and Retinal Specificity of Its Aftereffects
Habtegiorgis, Selam W.; Rifai, Katharina; Lappe, Markus; Wahl, Siegfried
2017-01-01
Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway. PMID:28751870
Object size determines the spatial spread of visual time
McGraw, Paul V.; Roach, Neil W.; Whitaker, David
2016-01-01
A key question for temporal processing research is how the nervous system extracts event duration, despite a notable lack of neural structures dedicated to duration encoding. This is in stark contrast with the orderly arrangement of neurons tasked with spatial processing. In this study, we examine the linkage between the spatial and temporal domains. We use sensory adaptation techniques to generate after-effects where perceived duration is either compressed or expanded in the opposite direction to the adapting stimulus' duration. Our results indicate that these after-effects are broadly tuned, extending over an area approximately five times the size of the stimulus. This region is directly related to the size of the adapting stimulus—the larger the adapting stimulus the greater the spatial spread of the after-effect. We construct a simple model to test predictions based on overlapping adapted versus non-adapted neuronal populations and show that our effects cannot be explained by any single, fixed-scale neural filtering. Rather, our effects are best explained by a self-scaled mechanism underpinned by duration selective neurons that also pool spatial information across earlier stages of visual processing. PMID:27466452
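The self-scaled spread described above can be illustrated with a toy profile. This is a minimal sketch under stated assumptions, not the authors' fitted model: the Gaussian shape, the 5x scale factor, and the peak distortion are all placeholders used only to show how the aftereffect region could track adaptor size.

```python
import numpy as np

def duration_aftereffect(offset_deg, adaptor_deg, scale=5.0, peak=0.25):
    """Assumed fractional distortion of perceived duration at a test
    position offset_deg from the adaptor centre; the adapted region
    spans roughly `scale` times the adaptor size (illustrative)."""
    sigma = scale * adaptor_deg / 2.0
    return peak * np.exp(-offset_deg ** 2 / (2.0 * sigma ** 2))

def half_width(adaptor_deg):
    """Offset at which the aftereffect falls to half its peak."""
    offs = np.linspace(0, 100, 10001)
    eff = duration_aftereffect(offs, adaptor_deg)
    target = duration_aftereffect(0.0, adaptor_deg) / 2.0
    return offs[np.argmin(np.abs(eff - target))]

small, large = 2.0, 8.0   # adaptor diameters, deg (hypothetical values)
ratio = half_width(large) / half_width(small)
print(ratio)              # ~4: spread scales with adaptor size
```

The key property the sketch captures is self-scaling: quadrupling the adaptor size quadruples the half-width of the aftereffect region, which no single fixed-scale spatial filter can reproduce.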
Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C
2011-01-01
A face appears normal when it approximates the average of a population. Consequently, exposure to faces biases perceptions of subsequently viewed faces such that faces similar to those recently seen are perceived as more normal. Simultaneously inducing such aftereffects in opposite directions for two groups of faces indicates somewhat discrete representations for those groups. Here we examine how labelling influences the perception of category in faces differing in colour. We show category-contingent aftereffects following exposure to faces differing in eye spacing (wide versus narrow) for blue versus red faces when such groups are consistently labelled with socially meaningful labels (Extravert versus Introvert; Soldier versus Builder). Category-contingent aftereffects were not seen using identical methodology when labels were not meaningful or were absent. These data suggest that human representations of faces can be rapidly tuned to code for meaningful social categories and that such tuning requires both a label and an associated visual difference. Results highlight the flexibility of the cognitive visual system to discriminate categories even in adulthood. Copyright © 2010 Elsevier B.V. All rights reserved.
Birznieks, I.; Vickery, R. M.; Holcombe, A. O.; Seizova-Cajic, T.
2016-01-01
Neurophysiological studies in primates have found that direction-sensitive neurons in the primary somatosensory cortex (SI) generally increase their response rate with increasing speed of object motion across the skin and show little evidence of speed tuning. We employed psychophysics to determine whether human perception of motion direction could be explained by features of such neurons and whether evidence can be found for a speed-tuned process. After adaptation to motion across the skin, a subsequently presented dynamic test stimulus yields an impression of motion in the opposite direction. We measured the strength of this tactile motion aftereffect (tMAE) induced with different combinations of adapting and test speeds. Distal-to-proximal or proximal-to-distal adapting motion was applied to participants' index fingers using a tactile array, after which participants reported the perceived direction of a bidirectional test stimulus. An intensive code for speed, like that observed in SI neurons, predicts greater adaptation (and a stronger tMAE) the faster the adapting speed, regardless of the test speed. In contrast, speed tuning of direction-sensitive neurons predicts the greatest tMAE when the adapting and test stimuli have matching speeds. We found that the strength of the tMAE increased monotonically with adapting speed, regardless of the test speed, showing no evidence of speed tuning. Our data are consistent with neurophysiological findings that suggest an intensive code for speed along the motion processing pathways comprising neurons sensitive both to speed and direction of motion. PMID:26823511
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
Myopes show increased susceptibility to nearwork aftereffects.
Ciuffreda, K J; Wallis, D M
1998-09-01
Some aspects of accommodation may be slightly abnormal (or different) in myopes, compared with accommodation in emmetropes and hyperopes. For example, the initial magnitude of accommodative adaptation in the dark after nearwork is greatest in myopes. However, the critical test is to assess this initial accommodative aftereffect and its subsequent decay in the light under more natural viewing conditions with blur-related visual feedback present, if a possible link between this phenomenon and clinical myopia is to be considered. Subjects consisted of adult late- (n = 11) and early-onset (n = 13) myopes, emmetropes (n = 11), and hyperopes (n = 9). The distance-refractive state was assessed objectively using an autorefractor immediately before and after a 10-minute binocular near task at 20 cm (5 diopters [D]). Group results showed that myopes were most susceptible to the nearwork aftereffect. It averaged 0.35 D in initial magnitude, with considerably faster posttask decay to baseline in the early-onset (35 seconds) versus late-onset (63 seconds) myopes. There was no myopic aftereffect in the remaining two refractive groups. The myopes showed particularly striking accommodatively related nearwork aftereffect susceptibility. As has been speculated and found by many others, transient pseudomyopia may cause or be a precursor to permanent myopia or myopic progression. Time-integrated increased retinal defocus causing axial elongation is proposed as a possible mechanism.
Spatial compression impairs prism adaptation in healthy individuals.
Scriven, Rachel J; Newport, Roger
2013-01-01
Neglect patients typically present with gross inattention to one side of space following damage to the contralateral hemisphere. While prism-adaptation (PA) is effective in ameliorating some neglect behaviors, the mechanisms involved and their relationship to neglect remain unclear. Recent studies have shown that conscious strategic control (SC) processes in PA may be impaired in neglect patients, who are also reported to show extraordinarily long aftereffects compared to healthy participants. Determining the underlying cause of these effects may be the key to understanding therapeutic benefits. Alternative accounts suggest that reduced SC might result from a failure to detect prism-induced reaching errors properly either because (a) the size of the error is underestimated in compressed visual space or (b) pathologically increased error-detection thresholds reduce the requirement for error correction. The purpose of this study was to model these two alternatives in healthy participants and to examine whether SC and subsequent aftereffects were abnormal compared to standard PA. Each participant completed three PA procedures within a MIRAGE mediated reality environment with direction errors recorded before, during and after adaptation. During PA, visual feedback of the reach could be compressed, perturbed by noise, or represented veridically. Compressed visual space significantly reduced SC and aftereffects compared to control and noise conditions. These results support recent observations in neglect patients, suggesting that a distortion of spatial representation may successfully model neglect and explain neglect performance while adapting to prisms.
Rhodes, Gillian; Pond, Stephen; Burton, Nichola; Kloth, Nadine; Jeffery, Linda; Bell, Jason; Ewing, Louise; Calder, Andrew J; Palermo, Romina
2015-09-01
Traditional models of face perception emphasize distinct routes for processing face identity and expression. These models have been highly influential in guiding neural and behavioural research on the mechanisms of face perception. However, it is becoming clear that specialised brain areas for coding identity and expression may respond to both attributes and that identity and expression perception can interact. Here we use perceptual aftereffects to demonstrate the existence of dimensions in perceptual face space that code both identity and expression, further challenging the traditional view. Specifically, we find a significant positive association between face identity aftereffects and expression aftereffects, which dissociates from other face (gaze) and non-face (tilt) aftereffects. Importantly, individual variation in the adaptive calibration of these common dimensions significantly predicts ability to recognize both identity and expression. These results highlight the role of common dimensions in our ability to recognize identity and expression, and show why the high-level visual processing of these attributes is not entirely distinct. Copyright © 2015 Elsevier B.V. All rights reserved.
Nine-year-old children use norm-based coding to visually represent facial expression.
Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian
2013-10-01
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J
2013-04-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.
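The norm-based, opponent-coding prediction tested in the two abstracts above (aftereffects that grow with adaptor extremity) can be written out as a small two-pool model. This is a hedged sketch with illustrative baseline, gain, and fatigue parameters, not the authors' analysis: identity is read out as the difference between two opponent pools, and adapting a pool reduces its gain in proportion to its response to the adaptor.

```python
def pools(x, base=1.0):
    """Opponent pool responses to identity value x (x = 0 is the norm;
    the sign of x codes direction along the identity axis)."""
    return max(base + x, 0.0), max(base - x, 0.0)

def aftereffect(adapt_x, fatigue=0.3, base=1.0):
    """Perceived identity of the norm (x = 0) after adapting to adapt_x.
    Each pool's gain is divisively reduced by its adaptor response."""
    a_pos, a_neg = pools(adapt_x, base)
    g_pos = 1.0 / (1.0 + fatigue * a_pos)
    g_neg = 1.0 / (1.0 + fatigue * a_neg)
    t_pos, t_neg = pools(0.0, base)       # test with the average face/body
    return g_pos * t_pos - g_neg * t_neg  # decoded identity of the norm

# Opponent-coding signature: adapting to an anti-identity (negative x)
# makes the norm look like the opposite identity (positive readout), and
# more extreme adaptors produce larger aftereffects.
weak = aftereffect(adapt_x=-0.5)
strong = aftereffect(adapt_x=-1.5)
print(weak, strong)   # both positive; strong > weak
```

An exemplar-based (multichannel) account, by contrast, would predict the aftereffect to peak at some intermediate adaptor extremity rather than grow monotonically, which is the contrast both studies exploit.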
Postural Control Disturbances Produced By Exposure to HMD and Dome Vr Systems
NASA Technical Reports Server (NTRS)
Harm, D. L.; Taylor, L. C.
2005-01-01
Two critical and unresolved human factors issues in VR systems are: 1) potential "cybersickness", a form of motion sickness which is experienced in virtual worlds, and 2) maladaptive sensorimotor performance following exposure to VR systems. Interestingly, these aftereffects are often quite similar to adaptive sensorimotor responses observed in astronauts during and/or following space flight. Most astronauts and cosmonauts experience perceptual and sensorimotor disturbances during and following space flight. All astronauts exhibit decrements in postural control following space flight. It has been suggested that training in virtual reality (VR) may be an effective countermeasure for minimizing perceptual and/or sensorimotor disturbances. People adapt to consistent, sustained alterations of sensory input such as those produced by microgravity, and experimentally-produced stimulus rearrangements (e.g., reversing prisms, magnifying lenses, flight simulators, and VR systems). Adaptation is revealed by aftereffects including perceptual disturbances and sensorimotor control disturbances. The purpose of the current study was to compare disturbances in postural control produced by dome and head-mounted virtual environment displays. Individuals recovered from motion sickness and the detrimental effects of exposure to virtual reality on postural control within one hour. Sickness severity and initial decrements in postural equilibrium decreased over days, which suggests that subjects become dual-adapted over time. These findings provide some direction for developing training schedules for VR users that facilitate adaptation, and address safety concerns about aftereffects.
Shabbott, Britne A; Sainburg, Robert L
2010-05-01
Visuomotor adaptation is mediated by errors between intended and sensory-detected arm positions. However, it is not clear whether visual-based errors that are shown during the course of motion lead to qualitatively different or more efficient adaptation than errors shown after movement. For instance, continuous visual feedback mediates online error corrections, which may facilitate or inhibit the adaptation process. We addressed this question by manipulating the timing of visual error information and task instructions during a visuomotor adaptation task. Subjects were exposed to a visuomotor rotation, during which they received continuous visual feedback (CF) of hand position with instructions to correct or not correct online errors, or knowledge-of-results (KR), provided as a static hand-path at the end of each trial. Our results showed that all groups improved performance with practice, and that online error corrections were inconsequential to the adaptation process. However, in contrast to the CF groups, the KR group showed relatively small reductions in mean error with practice, increased inter-trial variability during rotation exposure, and more limited generalization across target distances and workspace. Further, although the KR group showed improved performance with practice, after-effects were minimal when the rotation was removed. These findings suggest that simultaneous visual and proprioceptive information is critical in altering neural representations of visuomotor maps, although delayed error information may elicit compensatory strategies to offset perturbations.
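The trial-by-trial, error-driven adaptation summarized above is commonly formalized as a simple state-space model. The sketch below is purely illustrative and is not the authors' analysis; the retention rate `A`, learning rate `B`, and rotation size are assumed values.

```python
# Illustrative trial-by-trial state-space model of visuomotor rotation
# adaptation. x is the internal compensation (deg); A (retention) and
# B (learning rate) are assumed values, not parameters from the study.

def simulate_adaptation(rotation_deg=30.0, n_trials=80, A=0.98, B=0.2):
    x = 0.0
    errors = []
    for _ in range(n_trials):
        error = rotation_deg - x   # visual error experienced on this trial
        errors.append(error)
        x = A * x + B * error      # error-driven update with retention
    return errors

errors = simulate_adaptation()
# Error shrinks smoothly with practice, as in the continuous-feedback
# groups; removing the rotation at this point would leave an aftereffect
# of roughly the learned compensation x.
assert errors[0] > errors[-1]
```

Under these assumptions the residual error settles near (1 − A)·rotation/(1 − A + B), mirroring the incomplete compensation typically present when the rotation is removed and aftereffects are probed.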
The physiological locus of the spiral after-effect.
DOT National Transportation Integrated Search
1964-09-01
It has long been known that if an Archimedes spiral is rotated, an illusory motion of swelling or shrinking, depending on the direction of rotation, will be perceived. If, after the spiral is rotated, it is stopped and S looks at a stationary spiral,...
The surface and deep structure of the waterfall illusion.
Wade, Nicholas J; Ziefle, Martina
2008-11-01
The surface structure of the waterfall illusion or motion aftereffect (MAE) is its phenomenal visibility. Its deep structure will be examined in the context of a model of space and motion perception. Following protracted observation of a translating, rotating, or expanding/contracting pattern, a static pattern appears to move in the opposite direction. The phenomenon has long been known, and it continues to present novel properties. One of the novel features of MAEs is that they can provide an ideal visual assay for distinguishing local from global processes. Motion during adaptation can be induced in a static central grating by moving surround gratings; the MAE is observed in the static central grating but not in static surrounds. The adaptation phase is local and the test phase is global. That is, localised adaptation can be expressed in different ways depending on the structure of the test display. These aspects of MAEs can be exploited to determine a variety of local/global interactions. Six experiments on MAEs are reported. The results indicated that relational motion is required to induce an MAE; the region adapted extends beyond that stimulated; storage can be complete when the MAE is not seen during the storage period; interocular transfer (IOT) is around 30% of monocular MAEs with phase alternation; large field spiral patterns yield MAEs with characteristic monocular and binocular interactions.
Why do shape aftereffects increase with eccentricity?
Gheorghiu, Elena; Kingdom, Frederick A A; Bell, Jason; Gurnsey, Rick
2011-12-20
Studies have shown that spatial aftereffects increase with eccentricity. Here, we demonstrate that the shape-frequency and shape-amplitude aftereffects, which describe the perceived shifts in the shape of a sinusoidal-shaped contour following adaptation to a slightly different sinusoidal-shaped contour, also increase with eccentricity. Why does this happen? We first demonstrate that the perceptual shift increases with eccentricity for stimuli of fixed sizes. These shifts are not attenuated by variations in stimulus size; in fact, at each eccentricity the degree of perceptual shift is scale-independent. This scale independence is specific to the aftereffect because basic discrimination thresholds (in the absence of adaptation) decrease as size increases. Structural aspects of the displays were found to have a modest effect on the degree of perceptual shift; the degree of adaptation depends modestly on distance between stimuli during adaptation and post-adaptation testing. There were similar temporal rates of decline of adaptation across the visual field and higher post-adaptation discrimination thresholds in the periphery than in the center. The observed results are consistent with greater sensitivity reduction in adapted mechanisms following adaptation in the periphery or an eccentricity-dependent increase in the bandwidth of the shape-frequency- and shape-amplitude-selective mechanisms.
Not just the norm: exemplar-based models also predict face aftereffects.
Ross, David A; Deroche, Mickael; Palmeri, Thomas J
2014-02-01
The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.
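The opponent-coding idea can be made concrete with a toy one-dimensional identity axis. The two-pool sketch below is a minimal illustration, not one of the models actually simulated in the paper; the sigmoid tuning and the gain-loss constant `k` are assumptions.

```python
# Toy 1-D face space: identity is coded by two opponent pools tuned to
# opposite ends of an identity axis (a minimal "two-pool" sketch).
# The sigmoid tuning and gain-loss constant k are assumptions.
import math

def pool_response(x, preferred_sign):
    # Monotonic sigmoid response along the identity axis (norm at 0).
    return 1.0 / (1.0 + math.exp(-preferred_sign * x))

def perceived_identity(test, adaptor=None, k=0.3):
    g_pos = g_neg = 1.0
    if adaptor is not None:
        loss = min(0.9, k * abs(adaptor))  # stronger adaptors deplete more
        if adaptor > 0:
            g_pos -= loss
        else:
            g_neg -= loss
    # Decoded position relative to the norm (0 = average face).
    return g_pos * pool_response(test, +1) - g_neg * pool_response(test, -1)

# Adapting to an "anti-face" biases a norm-identity test toward the
# opposite identity, and the bias grows with adaptor extremity:
assert perceived_identity(0.0) == 0.0
assert perceived_identity(0.0, adaptor=-1.0) > 0.0
assert perceived_identity(0.0, adaptor=-2.0) > perceived_identity(0.0, adaptor=-1.0)
```

Because gain loss grows with adaptor extremity in this toy model, more extreme adaptors produce larger aftereffects, qualitatively matching the norm-based prediction discussed above.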
Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals
Czuba, Thaddeus B.; Cormack, Lawrence K.; Huk, Alexander C.
2016-01-01
Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no “cross-cue” adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. SIGNIFICANCE STATEMENT Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how—or indeed if—these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. PMID:27798134
Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals.
Joo, Sung Jun; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C
2016-10-19
Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how-or indeed if-these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms.
Spatiotopic updating of visual feature information.
Zimmermann, Eckart; Weidner, Ralph; Fink, Gereon R
2017-10-01
Saccades shift the retina with high-speed motion. In order to compensate for the sudden displacement, the visuomotor system needs to combine saccade-related information and visual metrics. Many neurons in oculomotor but also in visual areas shift their receptive field shortly before the execution of a saccade (Duhamel, Colby, & Goldberg, 1992; Nakamura & Colby, 2002). These shifts supposedly enable the binding of information from before and after the saccade. It is a matter of current debate whether these shifts are merely location based (i.e., involve remapping of abstract spatial coordinates) or also comprise information about visual features. We have recently presented fMRI evidence for a feature-based remapping mechanism in visual areas V3, V4, and VO (Zimmermann, Weidner, Abdollahi, & Fink, 2016). In particular, we found fMRI adaptation in cortical regions representing a stimulus' retinotopic as well as its spatiotopic position. Here, we asked whether spatiotopic adaptation exists independently from retinotopic adaptation and which type of information is behaviorally more relevant after saccade execution. We first adapted at the saccade target location only and found a spatiotopic tilt aftereffect. Then, we simultaneously adapted both the fixation and the saccade target location but with opposite tilt orientations. As a result, adaptation from the fixation location was carried retinotopically to the saccade target position. The opposite tilt orientation at the retinotopic location altered the effects induced by spatiotopic adaptation. More precisely, it cancelled out spatiotopic adaptation at the saccade target location. We conclude that retinotopic and spatiotopic visual adaptation are independent effects.
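The location-based remapping at issue can be pictured as a simple coordinate transform: a spatiotopic position is the retinotopic position plus the current eye position. The sketch below is a deliberately minimal illustration of that bookkeeping, not the authors' model.

```python
# Minimal sketch of spatiotopic updating: a world-centred (spatiotopic)
# location is recovered by combining the retinotopic position with the
# current eye position. Values are illustrative.

def to_spatiotopic(retinotopic_deg, eye_position_deg):
    return retinotopic_deg + eye_position_deg

# A stimulus 5 deg right of fixation while the eyes look straight ahead:
before = to_spatiotopic(5.0, 0.0)
# After a 10-deg rightward saccade, the stimulus is 5 deg LEFT of
# fixation on the retina, yet its spatiotopic coordinate is unchanged:
after = to_spatiotopic(-5.0, 10.0)
assert before == after == 5.0
```

Adaptation tied to the retinotopic coordinate travels with the eyes across the saccade, whereas adaptation tied to the spatiotopic coordinate stays at the world location; the two can therefore be placed in conflict, as in the opposite-tilt manipulation described above.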
ERIC Educational Resources Information Center
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.
2013-01-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…
Sakurada, Takeshi; Hirai, Masahiro; Watanabe, Eiju
2016-01-01
Motor learning performance has been shown to be affected by various cognitive factors such as the focus of attention and motor imagery ability. Most previous studies on motor learning have shown that directing the attention of participants externally, such as on the outcome of an assigned body movement, can be more effective than directing their attention internally, such as on the body movement itself. However, to the best of our knowledge, no findings have been reported on the effect of the focus of attention selected according to the motor imagery ability of an individual on motor learning performance. We measured individual motor imagery ability as assessed by the Movement Imagery Questionnaire and classified the participants into kinesthetic-dominant (n = 12) and visual-dominant (n = 8) groups based on the questionnaire score. Subsequently, the participants performed a motor learning task in which they traced a trajectory under a visuomotor rotation. When the participants were required to direct their attention internally, the after-effects of the learning task in the kinesthetic-dominant group were significantly greater than those in the visual-dominant group. Conversely, when the participants were required to direct their attention externally, the after-effects of the visual-dominant group were significantly greater than those of the kinesthetic-dominant group. Furthermore, we found a significant positive correlation between the size of after-effects and the modality-dominance of motor imagery. These results suggest that a suitable attention strategy based on the intrinsic motor imagery ability of an individual can improve performance during motor learning tasks.
To hear or not to hear: Voice processing under visual load.
Zäske, Romi; Perlich, Marie-Christin; Schweinberger, Stefan R
2016-07-01
Adaptation to female voices causes subsequent voices to be perceived as more male, and vice versa. This contrastive aftereffect disappears under spatial inattention to adaptors, suggesting that voices are not encoded automatically. According to Lavie, Hirst, de Fockert, and Viding (2004), the processing of task-irrelevant stimuli during selective attention depends on perceptual resources and working memory. Possibly due to their social significance, faces may be an exceptional domain: That is, task-irrelevant faces can escape perceptual load effects. Here we tested voice processing, to study whether voice gender aftereffects (VGAEs) depend on low or high perceptual (Exp. 1) or working memory (Exp. 2) load in a relevant visual task. Participants adapted to irrelevant voices while either searching digit displays for a target (Exp. 1) or recognizing studied digits (Exp. 2). We found that the VGAE was unaffected by perceptual load, indicating that task-irrelevant voices, like faces, can also escape perceptual-load effects. Intriguingly, the VGAE was increased under high memory load. Therefore, visual working memory load, but not general perceptual load, determines the processing of task-irrelevant voices.
Enhanced attention amplifies face adaptation.
Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby
2011-08-15
Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources.
No Effect of Featural Attention on Body Size Aftereffects
Stephen, Ian D.; Bickersteth, Chloe; Mond, Jonathan; Stevenson, Richard J.; Brooks, Kevin R.
2016-01-01
Prolonged exposure to images of narrow bodies has been shown to induce a perceptual aftereffect, such that observers’ point of subjective normality (PSN) for bodies shifts toward narrower bodies. The converse effect is shown for adaptation to wide bodies. In low-level stimuli, object attention (attention directed to the object) and spatial attention (attention directed to the location of the object) have been shown to increase the magnitude of visual aftereffects, while object-based attention enhances the adaptation effect in faces. It is not known whether featural attention (attention directed to a specific aspect of the object) affects the magnitude of adaptation effects in body stimuli. Here, we manipulate the attention of Caucasian observers to different featural information in body images, by asking them to rate the fatness or sex typicality of male and female bodies manipulated to appear fatter or thinner than average. PSNs for body fatness were taken at baseline and after adaptation, and a change in PSN (ΔPSN) was calculated. A body size adaptation effect was found, with observers who viewed fat bodies showing an increased PSN, and those exposed to thin bodies showing a reduced PSN. However, manipulations of featural attention to body fatness or sex typicality produced equivalent results, suggesting that featural attention may not affect the strength of the body size aftereffect. PMID:27597835
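A PSN of the kind measured here is typically read off a psychometric function. The sketch below recovers it by linear interpolation of the 50% point, with made-up response proportions; the study's actual fitting procedure may differ.

```python
# Illustrative computation of a point of subjective normality (PSN):
# the body width at which "fatter than normal" responses cross 50%,
# found by linear interpolation. Widths and proportions are invented.

def psn(widths, p_too_fat):
    for i in range(len(widths) - 1):
        w0, w1 = widths[i], widths[i + 1]
        p0, p1 = p_too_fat[i], p_too_fat[i + 1]
        if p0 <= 0.5 <= p1:
            return w0 + (0.5 - p0) * (w1 - w0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the data")

widths = [80, 90, 100, 110, 120]        # % of average body width
pre = [0.02, 0.10, 0.50, 0.90, 0.98]    # baseline judgments
post = [0.10, 0.50, 0.90, 0.97, 0.99]   # after adapting to thin bodies
delta_psn = psn(widths, post) - psn(widths, pre)
assert delta_psn < 0  # PSN shifts toward narrower bodies, as reported
```

ΔPSN is then simply the post-adaptation PSN minus the baseline PSN, negative after thin-body adaptation and positive after fat-body adaptation.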
Rhodes, Gillian; Ewing, Louise; Jeffery, Linda; Avard, Eleni; Taylor, Libby
2014-09-01
Faces are adaptively coded relative to visual norms that are updated by experience. This coding is compromised in autism and the broader autism phenotype, suggesting that atypical adaptive coding of faces may be an endophenotype for autism. Here we investigate the nature of this atypicality, asking whether adaptive face-coding mechanisms are fundamentally altered, or simply less responsive to experience, in autism. We measured adaptive coding, using face identity aftereffects, in cognitively able children and adolescents with autism and neurotypical age- and ability-matched participants. We asked whether these aftereffects increase with adaptor identity strength as in neurotypical populations, or whether they show a different pattern indicating a more fundamental alteration in face-coding mechanisms. As expected, face identity aftereffects were reduced in the autism group, but they nevertheless increased with adaptor strength, like those of our neurotypical participants, consistent with norm-based coding of face identity. Moreover, their aftereffects correlated positively with face recognition ability, consistent with an intact functional role for adaptive coding in face recognition ability. We conclude that adaptive norm-based face-coding mechanisms are basically intact in autism, but are less readily calibrated by experience.
Veniero, Domenica; Oliveri, Massimiliano
2018-01-01
Prismatic adaptation (PA) has been proposed as a tool to induce neural plasticity and is used to help neglect rehabilitation. It leads to a recalibration of visuomotor coordination during pointing as well as to aftereffects on a number of sensorimotor and attention tasks, but whether these effects originate at a motor or attentional level remains a matter of debate. Our aim was to further characterize PA aftereffects by using an approach that allows distinguishing between effects on attentional and motor processes. We recorded EEG in healthy human participants (9 females and 7 males) while performing a new double-step, anticipatory attention/motor preparation paradigm before and after adaptation to rightward-shifting prisms, with neutral lenses as a control. We then examined PA aftereffects through changes in known oscillatory EEG signatures of spatial attention orienting and motor preparation in the alpha and beta frequency bands. Our results were twofold. First, we found PA to rightward-shifting prisms to selectively affect EEG signatures of motor but not attentional processes. More specifically, PA modulated preparatory motor EEG activity over central electrodes in the right hemisphere, contralateral to the PA-induced, compensatory leftward shift in pointing movements. No effects were found on EEG signatures of spatial attention orienting over occipitoparietal sites. Second, we found the PA effect on preparatory motor EEG activity to dominate in the beta frequency band. We conclude that changes to intentional visuomotor, rather than attentional visuospatial, processes underlie the PA aftereffect of rightward-deviating prisms in healthy participants. SIGNIFICANCE STATEMENT Prismatic adaptation (PA) has been proposed as a tool to induce neural plasticity in both healthy participants and patients, due to its aftereffect impacting on a number of visuospatial and visuomotor functions.
However, the neural mechanisms underlying PA aftereffects are poorly understood as only little neuroimaging evidence is available. Here, we examined, for the first time, the origin of PA aftereffects studying oscillatory brain activity. Our results show a selective modulation of preparatory motor activity following PA in healthy participants but no effect on attention-related activity. This provides novel insight into the PA aftereffect in the healthy brain and may help to inform interventions in neglect patients. PMID:29255004
Why do parallel cortical systems exist for the perception of static form and moving form?
Grossberg, S
1991-02-01
This article analyzes computational properties that clarify why the parallel cortical systems V1→V2, V1→MT, and V1→V2→MT exist for the perceptual processing of static visual forms and moving visual forms. The article describes a symmetry principle, called FM symmetry, that is predicted to govern the development of these parallel cortical systems by computing all possible ways of symmetrically gating sustained cells with transient cells and organizing these sustained-transient cells into opponent pairs of on-cells and off-cells whose output signals are insensitive to direction of contrast. This symmetric organization explains how the static form system (static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast and insensitive to direction of motion, whereas the motion form system (motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast but sensitive to direction of motion. FM symmetry clarifies why the geometries of static and motion form perception differ; for example, why the opposite orientation of vertical is horizontal (90 degrees), but the opposite direction of up is down (180 degrees). Opposite orientations and directions are embedded in gated dipole opponent processes that are capable of antagonistic rebound. Negative afterimages, such as the MacKay and waterfall illusions, are hereby explained, as are aftereffects of long-range apparent motion. These antagonistic rebounds help to control a dynamic balance between complementary perceptual states of resonance and reset. Resonance cooperatively links features into emergent boundary segmentations via positive feedback in a CC loop, and reset terminates a resonance when the image changes, thereby preventing massive smearing of percepts.
These complementary preattentive states of resonance and reset are related to analogous states that govern attentive feature integration, learning, and memory search in adaptive resonance theory. The mechanism used in the V1→MT system to generate a wave of apparent motion between discrete flashes may also be used in other cortical systems to generate spatial shifts of attention. The theory suggests how the V1→V2→MT cortical stream helps to compute moving form in depth and how long-range apparent motion of illusory contours occurs. These results collectively argue against vision theories that espouse independent processing modules. Instead, specialized subsystems interact to overcome computational uncertainties and complementary deficiencies, to cooperatively bind features into context-sensitive resonances, and to realize symmetry principles that are predicted to govern the development of the visual cortex.
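The antagonistic rebound attributed to gated dipoles can be sketched with two opponent channels whose transmitter gates habituate with use. This is an illustrative toy simulation with assumed parameters, not Grossberg's full model.

```python
# Toy gated dipole: two opponent channels share a tonic input; each has
# a transmitter gate z that depletes with use and slowly recovers.
# When a sustained extra input to the "on" channel switches off, the
# depleted on-gate transmits less than the rested off-gate, so the
# off channel transiently wins -- an antagonistic rebound, as in the
# negative aftereffects discussed above. All parameters are assumed.

def simulate_dipole(n_steps=400, stim_off=200, tonic=1.0, stim=1.0,
                    recovery=0.01, depletion=0.05):
    z_on = z_off = 1.0  # transmitter gates start fully stocked
    out = []
    for t in range(n_steps):
        i_on = tonic + (stim if t < stim_off else 0.0)
        i_off = tonic
        out.append(i_on * z_on - i_off * z_off)  # + : on wins, - : rebound
        # transmitter habituates in proportion to gated signal, recovers to 1
        z_on += recovery * (1.0 - z_on) - depletion * i_on * z_on
        z_off += recovery * (1.0 - z_off) - depletion * i_off * z_off
    return out

out = simulate_dipole()
assert out[0] > 0     # during stimulation the on channel dominates
assert out[210] < 0   # after offset, a transient opposite rebound
```

The rebound then decays as the on-gate restocks, which is qualitatively how such models capture the limited duration of negative aftereffects.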
Gómez-Moya, Rosinna; Díaz, Rosalinda; Fernandez-Ruiz, Juan
2016-04-01
Different processes are involved during visuomotor learning, including an error-based procedural mechanism and a strategy-based cognitive mechanism. Our objective was to analyze whether the changes in the adaptation or the aftereffect components of visuomotor learning measured across development reflected different maturation rates of the aforementioned mechanisms. Ninety-five healthy children aged 4-12 years and a group of young adults participated in a wedge-prism and a dove-prism throwing task, which laterally displace or horizontally reverse the visual field, respectively. The results show that despite the age-related differences in motor control, all children groups adapted in the error-based wedge-prism condition. However, when the prisms were removed, small children showed a slower aftereffect extinction rate. On the strategy-based visual reversing task, only the older children group reached adult-like levels. These results are consistent with the idea of different mechanisms with asynchronous maturation rates participating during visuomotor learning.
Otten, Marte; Banaji, Mahzarin R.
2012-01-01
A number of recent behavioral studies have shown that emotional expressions are differently perceived depending on the race of a face, and that perception of race cues is influenced by emotional expressions. However, neural processes related to the perception of invariant cues that indicate the identity of a face (such as race) are often described to proceed independently of processes related to the perception of cues that can vary over time (such as emotion). Using a visual face adaptation paradigm, we tested whether these behavioral interactions between emotion and race also reflect interdependent neural representation of emotion and race. We compared visual emotion aftereffects when the adapting face and ambiguous test face differed in race or not. Emotion aftereffects were much smaller in different race (DR) trials than same race (SR) trials, indicating that the neural representation of a facial expression is significantly different depending on whether the emotional face is black or white. It thus seems that invariable cues such as race interact with variable face cues such as emotion not just at a response level, but also at the level of perception and neural representation. PMID:22403531
Dziuda, Lukasz; Biernacki, Marcin P; Baran, Paulina M; Truszczyński, Olaf E
2014-05-01
In the study, we checked: 1) how the simulator test conditions affect the severity of simulator sickness symptoms; 2) how the severity of simulator sickness symptoms changes over time; and 3) whether the conditions of the simulator test affect the severity of these symptoms in different ways, depending on the time that has elapsed since the performance of the task in the simulator. We studied 12 men aged 24-33 years (M = 28.8, SD = 3.26) using a truck simulator. The SSQ questionnaire was used to assess the severity of the symptoms of simulator sickness. Each of the subjects performed three 30-minute tasks running along the same route in a driving simulator. Each of these tasks was carried out in a different simulator configuration: A) fixed base platform with poor visibility; B) fixed base platform with good visibility; and C) motion base platform with good visibility. The measurement of the severity of the simulator sickness symptoms took place in five consecutive intervals. The results of the analysis showed that the simulator test conditions affect the severity of the simulator sickness symptoms in different ways, depending on the time that has elapsed since performing the task on the simulator. The simulator sickness symptoms persisted at the highest level for the test conditions involving the motion base platform. Also, when performing the tasks on the motion base platform, the severity of the simulator sickness symptoms varied depending on the time that had elapsed since performing the task. Specifically, the addition of motion to the simulation increased the oculomotor and disorientation symptoms reported as well as the duration of the after-effects.
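SSQ scores of the kind used here are conventionally derived from three raw symptom-cluster sums using the standard published weights of Kennedy and colleagues. The sketch below applies those weights but omits the mapping of the 16 questionnaire items onto clusters, so treat it as illustrative rather than a full scorer.

```python
# Conventional SSQ scoring from three raw symptom-cluster sums.
# Subscale weights follow Kennedy et al.'s standard scoring; the
# 16-item-to-cluster mapping is omitted here for brevity.

def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    return {
        "nausea": nausea_raw * 9.54,
        "oculomotor": oculomotor_raw * 7.58,
        "disorientation": disorientation_raw * 13.92,
        "total": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }

# Example: a profile dominated by disorientation symptoms, as reported
# for the motion-base condition, yields a high disorientation subscore:
s = ssq_scores(1, 2, 3)
assert s["disorientation"] > s["oculomotor"]
```

Because disorientation carries the largest weight, even modest raw disorientation sums dominate the profile, consistent with the symptom pattern reported for the motion-base platform.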
Figural Aftereffects: An Explanation in Terms of Multiple Mechanisms in the Human Visual System
1983-04-19
increments. The width of the four TFs was held constant at 30 min (the width of the smallest TF) while the height varied from 15 (TF1) to 60 (TF4)...in width from 15 (TF1) to 30 (TF4) min of arc in 5-min increments and were oriented at 0°, or vertical. A range of 900 to 1800 min² of visual angle
What Do We Learn from Binding Features? Evidence for Multilevel Feature Integration
ERIC Educational Resources Information Center
Colzato, Lorenza S.; Raffone, Antonino; Hommel, Bernhard
2006-01-01
Four experiments were conducted to investigate the relationship between the binding of visual features (as measured by their after-effects on subsequent binding) and the learning of feature-conjunction probabilities. Both binding and learning effects were obtained, but they did not interact. Interestingly, (shape-color) binding effects…
Top-down knowledge modulates onset capture in a feedforward manner.
Becker, Stefanie I; Lewis, Amanda J; Axtens, Jenna E
2017-04-01
How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150 ms post-stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual stage account in two eye tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (<150-ms latencies), without incurring any costs for selection of target matching distractors. These results unambiguously support a feedforward account of top-down modulation.
Robot-assisted adaptive training: custom force fields for teaching movement patterns.
Patton, James L; Mussa-Ivaldi, Ferdinando A
2004-04-01
Based on recent studies of neuro-adaptive control, we tested a new iterative algorithm to generate custom training forces to "trick" subjects into altering their target-directed reaching movements to a prechosen movement as an after-effect of adaptation. The prechosen movement goal, a sinusoidal-shaped path from start to end point, was never explicitly conveyed to the subject. We hypothesized that the adaptation would cause an alteration in the feedforward command that would result in the prechosen movement. Our results showed that when forces were suddenly removed after a training period of 330 movements, trajectories were significantly shifted toward the prechosen movement. However, de-adaptation occurred (i.e., the after-effect "washed out") in the 50-75 movements that followed the removal of the training forces. A second experiment suppressed vision of hand location and found a detectable reduction in the washout of after-effects, suggesting that visual feedback of error critically influences learning. A final experiment demonstrated that after-effects were also present in the neighborhood of training--44% of the original directional shift was seen in adjacent, unpracticed movement directions to targets that were 60 degrees different from the targets used for training. These results demonstrate the potential of these methods for teaching motor skills and for neuro-rehabilitation of brain-injured patients. This is a form of "implicit learning," because unlike explicit training methods, subjects learn movements with minimal instructions, no knowledge of, and little attention to, the trajectory.
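The iterative scheme described above can be sketched as a trial-to-trial, error-driven update of the training-force profile. This is a minimal illustrative sketch, not the authors' exact algorithm: the function name, the fixed gain, and the simple proportional update rule are all assumptions.

```python
def update_training_forces(forces, actual_traj, desired_traj, gain=0.5):
    """One iteration of a hypothetical error-driven force update: nudge the
    next trial's training forces in proportion to the gap between the
    prechosen (desired) trajectory and the trajectory actually produced.
    Each argument is a per-sample profile along the movement path."""
    return [f + gain * (d - a)
            for f, a, d in zip(forces, actual_traj, desired_traj)]
```

Iterated over many trials, such a rule would shape the forces so that the subject's adapted feedforward command, revealed as an after-effect once the forces are removed, approaches the prechosen (e.g., sinusoidal) path.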
Hummel, Dennis; Rudolf, Anne K; Brandi, Marie-Luise; Untch, Karl-Heinz; Grabhorn, Ralph; Hampel, Harald; Mohr, Harald M
2013-12-01
Visual perception can be strongly biased due to exposure to specific stimuli in the environment, often causing neural adaptation and visual aftereffects. In this study, we investigated whether adaptation to certain body shapes biases the perception of the own body shape. Furthermore, we aimed to evoke neural adaptation to certain body shapes. Participants completed a behavioral experiment (n = 14) to rate manipulated pictures of their own bodies after adaptation to demonstratively thin or fat pictures of their own bodies. The same stimuli were used in a second experiment (n = 16) using functional magnetic resonance imaging (fMRI) adaptation. In the behavioral experiment, after adapting to a thin picture of the own body participants also judged a thinner than actual body picture to be the most realistic and vice versa, resembling a typical aftereffect. The fusiform body area (FBA) and the right middle occipital gyrus (rMOG) show neural adaptation to specific body shapes while the extrastriate body area (EBA) bilaterally does not. The rMOG cluster is highly selective for bodies and perhaps body parts. The findings of the behavioral experiment support the existence of a perceptual body shape aftereffect, resulting from a specific adaptation to thin and fat pictures of one's own body. The fMRI results imply that body shape adaptation occurs in the FBA and the rMOG. The role of the EBA in body shape processing remains unclear. The results are also discussed in the light of clinical body image disturbances. Copyright © 2012 Wiley Periodicals, Inc.
The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition
ERIC Educational Resources Information Center
Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian
2009-01-01
Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…
Flicker Adaptation of Low-Level Cortical Visual Neurons Contributes to Temporal Dilation
ERIC Educational Resources Information Center
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Nonsensory factors, such as increased arousal and attention, have been thought to mediate this flicker-based temporal-dilation aftereffect. In this study, we provide evidence that adaptation of low-level cortical visual…
Furl, N; van Rijsbergen, N J; Treves, A; Dolan, R J
2007-08-01
Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating bilateral anterior ventral rhinal cortices, perirhinal cortex and left anterior hippocampus are regions previously implicated in high-level visual perception. These categorization effects were not evident in fusiform or occipital gyri, although activity in these regions was reduced to repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream to those showing reductions due to stimulus repetition.
Anthony Eikema, Diderik Jan A.; Chien, Jung Hung; Stergiou, Nicholas; Myers, Sara A.; Scott-Pandorf, Melissa M.; Bloomberg, Jacob J.; Mukherjee, Mukul
2015-01-01
Human locomotor adaptation requires feedback and feed-forward control processes to maintain an appropriate walking pattern. Adaptation may require the use of visual and proprioceptive input to decode altered movement dynamics and generate an appropriate response. After a person transfers from an extreme sensory environment and back, as astronauts do when they return from spaceflight, the prolonged period required for re-adaptation can pose a significant burden. In our previous paper, we showed that plantar tactile vibration during a split-belt adaptation task did not interfere with the treadmill adaptation; however, larger overground transfer effects with a slower decay resulted. Such effects, in the absence of visual feedback (of motion) and perturbation of tactile feedback, are believed to be due to a higher proprioceptive gain because, in the absence of relevant external dynamic cues such as optic flow, reliance on body-based cues is enhanced during gait tasks through multisensory integration. In this study we therefore investigated the effect of optic flow on tactile-stimulated split-belt adaptation as a paradigm to facilitate the sensorimotor adaptation process. Twenty healthy young adults, separated into two matched groups, participated in the study. All participants performed an overground walking trial followed by a split-belt treadmill adaptation protocol. The tactile group (TC) received vibratory plantar tactile stimulation only, whereas the virtual reality and tactile group (VRT) received an additional concurrent visual stimulation: a moving virtual corridor, inducing perceived self-motion. A post-treadmill overground trial was performed to determine adaptation transfer. Interlimb coordination of spatiotemporal and kinetic variables was quantified using symmetry indices, and analyzed using repeated-measures ANOVA. Marked changes of step length characteristics were observed in both groups during split-belt adaptation.
Stance and swing time symmetry were similar in the two groups, suggesting that temporal parameters are not modified by optic flow. However, whereas the TC group displayed significant stance time asymmetries during the post-treadmill session, such aftereffects were absent in the VRT group. The results indicated that the enhanced transfer resulting from exposure to plantar cutaneous vibration during adaptation was alleviated by optic flow information. The presence of visual self-motion information may have reduced proprioceptive gain during learning. Thus, during overground walking, the learned proprioceptive split-belt pattern is more rapidly overridden by visual input due to its increased relative gain. The results suggest that when visual stimulation is provided during adaptive training, the system acquires the novel movement dynamics while maintaining the ability to flexibly adapt to different environments. PMID:26525712
Seizova-Cajic, Tatjana; Holcombe, Alex O.
2015-01-01
After prolonged exposure to a surface moving across the skin, this felt movement appears slower, a phenomenon known as the tactile speed aftereffect (tSAE). We asked which feature of the adapting motion drives the tSAE: speed, the spacing between texture elements, or the frequency with which they cross the skin. After adapting to a ridged moving surface with one hand, participants compared the speed of test stimuli on adapted and unadapted hands. We used surfaces with different spatial periods (SPs; 3, 6, 12 mm) that produced adapting motion with different combinations of adapting speed (20, 40, 80 mm/s) and temporal frequency (TF; 3.4, 6.7, 13.4 ridges/s). The primary determinant of tSAE magnitude was speed of the adapting motion, not SP or TF. This suggests that adaptation occurs centrally, after speed has been computed from SP and TF, and/or that it reflects a speed cue independent of those features in the first place (e.g., indentation force). In a second experiment, we investigated the properties of the neural code for speed. Speed tuning predicts that adaptation should be greatest for speeds at or near the adapting speed. However, the tSAE was always stronger when the adapting stimulus was faster (242 mm/s) than the test (30–143 mm/s) compared with when the adapting and test speeds were matched. These results give no indication of speed tuning and instead suggest that adaptation occurs at a level where an intensive code dominates. In an intensive code, the faster the stimulus, the more the neurons fire. PMID:26631149
Disorders of motion and depth.
Nawrot, Mark
2003-08-01
Damage to the human homologue of area MT produces a motion perception deficit similar to that found in the monkey with MT lesions. Even temporary disruption of MT processing with transcranial magnetic stimulation can produce a temporary akinetopsia [127]. Motion perception deficits, however, also are found with a variety of subcortical lesions and other neurologic disorders that can best be described as causing a disconnection within the motion processing stream. The precise role of these subcortical structures, such as the cerebellum, remains to be determined. Simple motion perception, moreover, is only a part of MT function. It undoubtedly has an important role in the perception of depth from motion and stereopsis [112]. Psychophysical studies using aftereffects in normal observers suggest a link between stereo mechanisms and the perception of depth from motion [9-11]. There is even a simple correlation between stereo acuity and the perception of depth from motion [128]. Future studies of patients with cortical lesions will take a closer look at depth perception in association with motion perception and should provide a better understanding of how motion and depth are processed together.
The background is remapped across saccades.
Cha, Oakyoon; Chong, Sang Chul
2014-02-01
Physiological studies have found that neurons prepare for impending eye movements, showing anticipatory responses to stimuli presented at the location of the post-saccadic receptive fields (RFs) (Wurtz in Vis Res 48:2070-2089, 2008). These studies proposed that visual neurons with shifting RFs prepared for the stimuli they would process after an impending saccade. Additionally, psychophysical studies have shown behavioral consequences of those anticipatory responses, including the transfer of aftereffects (Melcher in Nat Neurosci 10:903-907, 2007) and the remapping of attention (Rolfs et al. in Nat Neurosci 14:252-258, 2011). As the physiological studies proposed, the shifting RF mechanism explains the transfer of aftereffects. Recently, a new mechanism based on activation transfer via a saliency map was proposed, which accounted for the remapping of attention (Cavanagh et al. in Trends Cogn Sci 14:147-153, 2010). We hypothesized that there would be different aspects of the remapping corresponding to these different neural mechanisms. This study found that the information in the background was remapped to a similar extent as the figure, provided that the visual context remained stable. We manipulated the status of the figure and the ground in the saliency map and showed that the manipulation modulated the remapping of the figure and the ground in different ways. These results suggest that the visual system has an ability to remap the background as well as the figure, but lacks the ability to modulate the remapping of the background based on the visual context, and that different neural mechanisms might work together to maintain visual stability across saccades.
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, viewing of their hand by subjects was allowed only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and self-assigned their pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated.
PMID:25408644
Squat exercise biomechanics during short-radius centrifugation.
Duda, Kevin R; Jarchow, Thomas; Young, Laurence R
2012-02-01
Centrifuge-induced artificial gravity (AG) with exercise is a promising comprehensive countermeasure against the physiological de-conditioning that results from exposure to weightlessness. However, body movements onboard a rotating centrifuge are affected by both the gravity gradient and Coriolis accelerations. The effect of centrifugation on squat exercise biomechanics was investigated, and differences between AG and upright squat biomechanics were quantified. There were 28 subjects (16 male) who participated in two separate experiments. Knee position, foot reaction forces, and motion sickness were recorded during the squats in a 1-G field while standing upright and while supine on a horizontally rotating 2 m radius centrifuge at 0, 23, or 30 rpm. No participants terminated the experiment due to motion sickness symptoms. Total mediolateral knee deflection increased by 1.0 to 2.0 cm during centrifugation, and did not result in any injuries. There was no evidence of an increased mediolateral knee travel "after-effect" during postrotation supine squats. Peak foot reaction forces increased with rotation rate up to approximately 200% bodyweight (iRED on ISS provides approximately 210% bodyweight resistance). The ratio of left-to-right foot force throughout the squat cycle on the centrifuge was nonconstant and approximately sinusoidal. Total foot reaction force versus knee flexion-extension angles differed between upright and AG squats due to centripetal acceleration on the centrifuge. A brief exercise protocol during centrifugation can be safely completed without significant after-effects in mediolateral knee position or motion sickness. Several recommendations are made for the design of future centrifuge-based exercise protocols for in-space applications.
[Brain Mechanisms for Measuring Time: Population Coding of Durations].
Hayashi, Masamichi J
2016-11-01
Temporal processing is crucial in many aspects of our perception and action. While there is mounting evidence for the encoding mechanisms of spatial ("where") and identity ("what") information, those of temporal information ("when") remain largely unknown. Recent studies suggested that, similarly to basic visual stimulus features such as orientation, motion direction, and numerical quantity, event durations are also represented by a population of neurons that are tuned for specific, preferred durations. This paper first reviews recent psychophysical studies on the duration aftereffect. Changes in three parameters (response gain, shift, and width of tuning curves) are then discussed that may need to be taken into account in the putative duration-channel model. Next, the potential neural basis of the duration channels is examined by reviewing recent neuroimaging and electrophysiological studies on time perception. Finally, this paper proposes a general neural basis of timing that commonly represents time differences independent of stimulus type (e.g., a single duration vs. multiple brief events). This extends the idea of the "when pathway" from the perception of temporal order to general timing mechanisms for the perception of duration, temporal frequency, and synchrony.
Estimation of Peak Ground Acceleration (PGA) for Peninsular Malaysia using geospatial approach
NASA Astrophysics Data System (ADS)
Nouri Manafizad, Amir; Pradhan, Biswajeet; Abdullahi, Saleh
2016-06-01
Among the various types of natural disasters, earthquake is considered one of the most destructive events, imposing a great number of human fatalities and economic losses. Visualization of earthquake events and estimation of peak ground motions provide a strong tool for scientists and authorities to predict and mitigate the aftereffects of earthquakes. In addition, it is useful for some businesses, such as insurance companies, to evaluate the amount of investment risk. Although Peninsular Malaysia is situated in the stable part of the Sunda plate, it is seismically influenced by the very active earthquake sources of the Sumatran fault and subduction zones. This study modelled the seismic zones and estimated the maximum credible earthquake (MCE) based on classified data for the period 1900 to 2014. A deterministic approach was implemented for the analysis. Attenuation equations were used for the two zones. Results show that the PGA produced by the subduction zone ranges from 2 to 64 gal, while that from the fault zone varies from 1 to 191 gal. In addition, the PGA generated by the fault zone is more critical than that of the subduction zone for the selected seismic model.
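A deterministic analysis of this kind evaluates an attenuation equation at the MCE of each source zone and keeps the controlling PGA per site. The sketch below uses a generic ground-motion attenuation form with made-up illustrative coefficients; it is not one of the relations actually used in the study.

```python
import math

def pga_gal(magnitude, distance_km, a=-1.0, b=0.9, c=1.2, d=10.0):
    """Generic attenuation (ground-motion prediction) form, with
    illustrative coefficients only:
        ln(PGA) = a + b*M - c*ln(R + d)
    where M is magnitude, R is source-to-site distance in km, and
    PGA is returned in gal (cm/s^2)."""
    return math.exp(a + b * magnitude - c * math.log(distance_km + d))
```

Under any such relation, estimated PGA decays with distance from the source and grows with magnitude, which is why a distant great-magnitude subduction event and a nearby moderate fault event can both control the hazard at a given site.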
Gender in facial representations: a contrast-based study of adaptation within and between the sexes.
Oruç, Ipek; Guo, Xiaoyue M; Barton, Jason J S
2011-01-18
Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space.
Mechanisms of the dimming and brightening aftereffects
Bosten, Jenny M.; MacLeod, Donald I. A.
2013-01-01
Dimming and brightening aftereffects occur after exposure to a temporal luminance sawtooth stimulus: A subsequently presented steady test field appears to become progressively dimmer or brighter, depending on the polarity of the adapting sawtooth. Although described as “dimming” and “brightening,” it is plausible that a component of the aftereffects is based on contrast changes rather than on luminance changes. We conducted two experiments to reveal any contrast component. In the first we investigated whether the aftereffects result from the same mechanism that causes a polarity-selective loss in contrast sensitivity following luminance sawtooth adaptation. We manipulated test contrast: If a component of the aftereffect results from a polarity selective loss of contrast sensitivity we would expect that the aftereffects would differ in magnitude depending on the contrast polarity of the test fields. We found no effect of test-field polarity. In the second experiment we used an adapting sawtooth with a polarity consistent in contrast but alternating in luminance in order to induce a potential equivalent aftereffect of contrast. Again, we found no evidence that the aftereffects result from contrast adaptation. In a third experiment, we used S-cone isolating stimuli to discover whether there are S-cone dimming and brightening aftereffects. We found no aftereffects. However, in a fourth experiment we replicated Krauskopf and Zaidi's (1986) finding that adaptation to S-cone sawtooth stimuli affects thresholds for increment and decrement detection. The mechanism underlying the dimming and brightening aftereffects thus seems to be independent of the mechanism underlying the concurrent polarity selective reductions in contrast sensitivity. PMID:23695534
Locomotor adaptation is modulated by observing the actions of others
Patel, Mitesh; Roberts, R. Edward; Riyaz, Mohammed U.; Ahmed, Maroof; Buckwell, David; Bunday, Karen; Ahmad, Hena; Kaski, Diego; Arshad, Qadeer
2015-01-01
Observing the motor actions of another person could facilitate compensatory motor behavior in the passive observer. Here we explored whether action observation alone can induce automatic locomotor adaptation in humans. To explore this possibility, we used the “broken escalator” paradigm. Conventionally this involves stepping upon a stationary sled after having previously experienced it actually moving (Moving trials). This history of motion produces a locomotor aftereffect when subsequently stepping onto a stationary sled. We found that viewing an actor perform the Moving trials was sufficient to generate a locomotor aftereffect in the observer, the size of which was significantly correlated with the size of the movement (postural sway) observed. Crucially, the effect is specific to watching the task being performed, as no motor adaptation occurs after simply viewing the sled move in isolation. These findings demonstrate that locomotor adaptation in humans can be driven purely by action observation, with the brain adapting motor plans in response to the size of the observed individual's motion. This mechanism may be mediated by a mirror neuron system that automatically adapts behavior to minimize movement errors and improve motor skills through social cues, although further neurophysiological studies are required to support this theory. These data suggest that merely observing the gait of another person in a challenging environment is sufficient to generate appropriate postural countermeasures, implying the existence of an automatic mechanism for adapting locomotor behavior. PMID:26156386
The tactile movement aftereffect.
Hollins, M; Favorov, O
1994-01-01
The existence of a tactile movement aftereffect was established in a series of experiments on the palmar surface of the hand and fingers of psychophysical observers. During adaptation, observers cupped their hand around a moving drum for up to 3 min; following this period of stimulation, they typically reported an aftereffect consisting of movement sensations located on and deep to the skin, and lasting for up to 1 min. Preliminary experiments comparing a number of stimulus materials mounted on the drum demonstrated that a surface approximating a low-spatial-frequency square wave, with a smooth microtexture, was especially effective at inducing the aftereffect; this adapting stimulus was therefore used throughout the two main experiments. In Experiment 1, the vividness of the aftereffect produced by 2 min of adaptation was determined under three test conditions: with the hand (1) remaining on the now stationary drum; (2) in contact with a soft, textured surface; or (3) suspended in air. Subjects' free magnitude estimates of the peak vividness of the aftereffect were not significantly different across conditions; each subject experienced the aftereffect at least once under each condition. Thus the tactile movement aftereffect does not seem to depend critically on the conditions of stimulation that obtain while it is being experienced. In Experiment 2, the vividness and duration of the aftereffect were measured as a function of the duration of the adapting stimulus. Both measures increased steadily over the range of durations explored (30-180 sec). In its dependence on adapting duration, the aftereffect resembles the waterfall illusion in vision. An explanation for the tactile movement aftereffect is proposed, based on the model of cortical dynamics of Whitsel et al. (1989, 1991).
With assumed modest variation of one parameter across individuals, this application of the model is able to account both for the data of the majority of subjects, who experienced the aftereffect as opposite in direction to the adapting stimulus, and for those of an anomalous subject, who consistently experienced the aftereffect as being in the same direction as the adapting stimulus.
Patton, James L; Stoykov, Mary Ellen; Kovic, Mark; Mussa-Ivaldi, Ferdinando A
2006-01-01
This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate "adaptive training." Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable "after-effect." A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands with a force proportional to hand speed and perpendicular to the direction of motion--either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task, we found error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.
Challinor, Kirsten L; Mond, Jonathan; Stephen, Ian D; Mitchison, Deborah; Stevenson, Richard J; Hay, Phillipa; Brooks, Kevin R
2017-12-01
Although body size and shape misperception (BSSM) is a common feature of anorexia nervosa, bulimia nervosa and muscle dysmorphia, little is known about its underlying neural mechanisms. Recently, a new approach has emerged, based on the long-established non-invasive technique of perceptual adaptation, which allows for inferences about the structure of the neural apparatus responsible for alterations in visual appearance. Here, we describe several recent experimental examples of BSSM, wherein exposure to "extreme" body stimuli induces visual aftereffects that bias perception. The implications of these studies for our understanding of the neural and cognitive representation of human bodies, along with their relevance to clinical practice, are discussed.
Erasing the face after-effect.
Kiani, Ghazaleh; Davies-Thompson, Jodie; Barton, Jason J S
2014-10-24
Perceptual after-effects decay over time at a rate that depends on several factors, such as the duration of adaptation and the duration of the test stimuli. Whether this decay is accelerated by exposure to other faces after adaptation is not known. Our goal was to determine if the appearance of other faces during a delay period after adaptation affected the face identity after-effect. In the first experiment we investigated whether, in the perception of ambiguous stimuli created by morphing between two faces, the repulsive after-effects from adaptation to one face were reduced by brief presentation of the second face in a delay period. We found no effect; however, this may have been confounded by a small attractive after-effect from the interference face. In the second experiment, the interference stimuli were faces unrelated to those used as adaptation stimuli, and we examined after-effects at three different delay periods. This showed a decline in after-effects as the time since adaptation increased, and an enhancement of this decline by the presentation of intervening faces. An exponential model estimated that the intervening faces caused an 85% reduction in the time constant of the after-effect decay. In conclusion, we confirm that face after-effects decline rapidly after adaptation and that exposure to other faces hastens the re-setting of the system. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
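The exponential model referred to above treats after-effect strength as A(t) = A0·exp(−t/τ); a minimal sketch in which all parameter values are hypothetical and only the 85% reduction in the time constant is taken from the abstract:

```python
import math

def aftereffect(t, a0=1.0, tau=10.0):
    """After-effect magnitude decaying exponentially with time constant tau.
    a0 and tau are in arbitrary, illustrative units."""
    return a0 * math.exp(-t / tau)

# Hypothetical time constants: intervening faces were estimated to shrink
# the decay time constant by 85% relative to a blank delay.
tau_blank = 10.0
tau_faces = 0.15 * tau_blank
```

At any fixed delay t, `aftereffect(t, tau=tau_faces)` is smaller than `aftereffect(t, tau=tau_blank)`, which is the sense in which intervening faces hasten the re-setting of the system.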
Second-order motions contribute to vection.
Gurnsey, R; Fleet, D; Potechin, C
1998-09-01
First- and second-order motions differ in their ability to induce motion aftereffects (MAEs) and the kinetic depth effect (KDE). To test whether second-order stimuli support computations relating to motion-in-depth we examined the vection illusion (illusory self-motion induced by image flow) using a vection stimulus (V, expanding concentric rings) that depicted a linear path through a circular tunnel. The set of vection stimuli contained differing amounts of first- and second-order motion energy (ME). Subjects reported the duration of the perceived MAEs and the duration of their vection percept. In Experiment 1, both MAEs and vection durations were longest when the first-order (Fourier) components of V were present in the stimulus. In Experiment 2, V was multiplicatively combined with static noise carriers having different check sizes. The amount of first-order ME associated with V increases with check size. MAEs were found to increase with check size but vection durations were unaffected. In general, MAEs depend on the amount of first-order ME present in the signal. Vection, on the other hand, appears to depend on a representation of image flow that combines first- and second-order ME.
NASA Technical Reports Server (NTRS)
Mattson, D. L.
1975-01-01
The effect of prolonged angular acceleration on choice reaction time to an accelerating visual stimulus was investigated, with 10 commercial airline pilots serving as subjects. The pattern of reaction times during and following acceleration was compared with the pattern of velocity estimates reported during identical trials. Both reaction times and velocity estimates increased at the onset of acceleration, declined prior to the termination of acceleration, and showed an aftereffect. These results are inconsistent with the torsion-pendulum theory of semicircular canal function and suggest that the vestibular adaptation is of central origin.
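The torsion-pendulum account models cupula deflection, and hence the sensed rotation, as a low-pass response to angular acceleration. A minimal first-order sketch (the heavily overdamped limit of that model), with the time constant and gain purely illustrative:

```python
import numpy as np

def cupula_deflection(alpha, dt=0.01, tau=6.0, k=1.0):
    """First-order (overdamped) torsion-pendulum approximation:
    d(theta)/dt = -theta/tau + k*alpha(t).
    tau (s) and k are illustrative, not physiological estimates."""
    theta = np.zeros(len(alpha))
    for i in range(1, len(alpha)):
        theta[i] = theta[i - 1] + dt * (-theta[i - 1] / tau + k * alpha[i - 1])
    return theta

# Constant angular acceleration for 30 s, then none.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
alpha = np.where(t < 30.0, 1.0, 0.0)
theta = cupula_deflection(alpha, dt=dt)
```

The simulated deflection rises toward a plateau and stays there for as long as the acceleration lasts; the pilots' velocity estimates instead declined before the acceleration ended, which is the inconsistency with the peripheral model that the abstract reports.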
Use of prism adaptation in children with unilateral brain lesion: Is it feasible?
Riquelme, Inmaculada; Henne, Camille; Flament, Benoit; Legrain, Valéry; Bleyenheuft, Yannick; Hatem, Samar M
2015-01-01
Unilateral visuospatial deficits have been observed in children with brain damage. While the effectiveness of prism adaptation for treating unilateral neglect in adult stroke patients has been demonstrated previously, the usefulness of prism adaptation in a paediatric population is still unknown. The present study aims at evaluating the feasibility of prism adaptation in children with unilateral brain lesion and comparing the validity of a game procedure, designed as a child-friendly paediatric intervention, with the ecological task used for prism adaptation in adult patients. Twenty-one children with unilateral brain lesion were randomly assigned to a prism group wearing prismatic glasses, or a control group wearing neutral glasses, during a bimanual task intervention. All children performed two different bimanual tasks on randomly assigned consecutive days: ecological tasks or game tasks. The efficacy of prism adaptation was measured by assessing its after-effects with visual open-loop pointing (visuoproprioceptive test) and subjective straight-ahead pointing (proprioceptive test). Game tasks and ecological tasks produced similar after-effects. Prismatic glasses elicited a significant shift of visuospatial coordinates which was not observed in the control group. Prism adaptation performed with game tasks seems an effective procedure to obtain after-effects in children with unilateral brain lesion. The usefulness of repetitive prism adaptation sessions as a therapeutic intervention in children with visuospatial deficits and/or neglect should be investigated in future studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gender in Facial Representations: A Contrast-Based Study of Adaptation within and between the Sexes
Oruç, Ipek; Guo, Xiaoyue M.; Barton, Jason J. S.
2011-01-01
Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than for those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space. PMID:21267414
Kloth, Nadine; Rhodes, Gillian; Schweinberger, Stefan R.
2015-01-01
Face aftereffects (e.g., expression aftereffects) can be simultaneously induced in opposite directions for different face categories (e.g., male and female faces). Such aftereffects are typically interpreted as indicating that distinct neural populations code the categories on which adaptation is contingent, e.g., male and female faces. Moreover, they suggest that these distinct populations selectively respond to variations in the secondary stimulus dimension, e.g., emotional expression. However, contingent aftereffects have now been reported for so many different combinations of face characteristics, that one might question this interpretation. Instead, the selectivity might be generated during the adaptation procedure, for instance as a result of associative learning, and not indicate pre-existing response selectivity in the face perception system. To alleviate this concern, one would need to demonstrate some limit to contingent aftereffects. Here, we report a clear limit, showing that gaze direction aftereffects are not contingent on face sex. We tested 36 young Caucasian adults in a gaze adaptation paradigm. We initially established their ability to discriminate the gaze direction of male and female test faces in a pre-adaptation phase. Afterwards, half of the participants adapted to female faces looking left and male faces looking right, and half adapted to the reverse pairing. We established the effects of this adaptation on the perception of gaze direction in subsequently presented male and female test faces. We found that adaptation induced pronounced gaze direction aftereffects, i.e., participants were biased to perceive small gaze deviations to both the left and right as direct. Importantly, however, aftereffects were identical for male and female test faces, showing that the contingency of face sex and gaze direction participants experienced during the adaptation procedure had no effect. PMID:26648890
Factors contributing to the adaptation aftereffects of facial expression.
Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S
2008-01-29
Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.
ERIC Educational Resources Information Center
Frings, Christian; Amendt, Anna; Spence, Charles
2011-01-01
Negative priming (NP) refers to the finding that people's responses to probe targets previously presented as prime distractors are usually slower than to unrepeated stimuli. Intriguingly, the effect sizes of tactile NP were much larger than the effect sizes for visual NP. We analyzed whether the large tactile NP effect is just a side effect of the…
Beyond the Sensorimotor Plasticity: Cognitive Expansion of Prism Adaptation in Healthy Individuals.
Michel, Carine
2015-01-01
Sensorimotor plasticity allows us to maintain an efficient motor behavior in reaction to environmental changes. One of the classical models for the study of sensorimotor plasticity is prism adaptation. It consists of pointing to visual targets while wearing prismatic lenses that shift the visual field laterally. The conditions of the development of the plasticity and the sensorimotor after-effects have been extensively studied for more than a century. However, the interest taken in this phenomenon was considerably increased since the demonstration of neglect rehabilitation following prism adaptation by Rossetti et al. (1998). Mirror effects, i.e., simulation of neglect in healthy individuals, were observed for the first time by Colent et al. (2000). The present review focuses on the expansion of prism adaptation to cognitive functions in healthy individuals during the last 15 years. Cognitive after-effects have been shown in numerous tasks even in those that are not intrinsically spatial in nature. Altogether, these results suggest the existence of a strong link between low-level sensorimotor plasticity and high-level cognitive functions and raise important questions about the mechanisms involved in producing unexpected cognitive effects following prism adaptation. Implications for the functional mechanisms and neuroanatomical network of prism adaptation are discussed to explain how sensorimotor plasticity may affect cognitive processes. PMID:26779088
The moving platform after-effect reveals dissociation between what we know and how we walk.
Reynolds, R; Bronstein, A
2007-01-01
Gait adaptation is crucial for coping with varying terrain and biological needs. It is also important that any acquired adaptation is expressed only in the appropriate context. Here we review a recent series of experiments which demonstrate inappropriate expression of gait adaptation. We showed that a brief period of walking onto a platform previously experienced as moving results in a large forward sway despite full awareness of the changing context. The adaptation mechanisms involved in this paradigm are extremely fast: just 1-2 discrete exposures to the moving platform result in a motor after-effect. This after-effect still occurs even if subjects deliberately attempt to suppress it. However, it disappears when the location or method of gait is altered, indicating that after-effect expression is context dependent. Conversely, making gait self-initiated increased sway during the after-effect. This after-effect demonstrates a profound dissociation between knowledge and action. The absence of generalisation suggests a simple form of motor learning. However, persistent expression of gait after-effects may be dependent on an intact cerebral cortex. The fact that the after-effect is greater during self-initiated gait, and is context dependent, would be consistent with the involvement of supraspinal areas.
Motion Controlled Gait Enhancing Mobile Shoe for Rehabilitation
Handzic, Ismet; Vasudevan, Erin V.; Reed, Kyle B.
2011-01-01
Walking on a split-belt treadmill, which has two belts that can be run at different speeds, has been shown to improve walking patterns post-stroke. However, these improvements are only temporarily retained once individuals transition to walking over ground. We hypothesize that longer-lasting effects would be observed if the training occurred during natural walking over ground, as opposed to on a treadmill. In order to study such long-term effects, we have developed a mobile and portable device which can simulate the same gait-altering movements experienced on a split-belt treadmill. The new motion-controlled gait-enhancing mobile shoe (GEMS) remedies the previous version's drawbacks: its motion is continuous, smooth, and regulated with on-board electronics. A vital component of this new design is the Archimedean spiral wheel shape that redirects the wearer's downward force into a horizontal backward motion. The design is passive and does not utilize any motors; its motion is regulated only by a small magnetic particle brake. Further experimentation is needed to evaluate the long-term after-effects. PMID:22275620
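The wheel profile named above is an Archimedean spiral, r(θ) = a + bθ, whose contact radius grows linearly with rotation angle; a minimal sketch, with the dimensions a and b purely illustrative:

```python
import numpy as np

def spiral_radius(theta, a=0.03, b=0.01):
    """Archimedean spiral r = a + b*theta (metres; a and b are
    illustrative values, not the shoe's actual dimensions)."""
    return a + b * theta

# Contact radius over one revolution: the linear growth of the radius under
# load is what lets the shoe translate a downward force into a smooth
# horizontal backward motion without motors.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
radius = spiral_radius(theta)
```

The monotone, constant-slope radius is the design property that makes the backward motion continuous and smooth rather than stepped.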
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise
2014-06-01
Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Three timescales in prism adaptation.
Inoue, Masato; Uchimura, Motoaki; Karibe, Ayaka; O'Shea, Jacinta; Rossetti, Yves; Kitazawa, Shigeru
2015-01-01
It has been proposed that motor adaptation depends on at least two learning systems, one that learns fast but with poor retention and another that learns slowly but with better retention (Smith MA, Ghazizadeh A, Shadmehr R. PLoS Biol 4: e179, 2006). This two-state model has been shown to account for a range of behavior in the force field adaptation task. In the present study, we examined whether such a two-state model could also account for behavior arising from adaptation to a prismatic displacement of the visual field. We first confirmed that an "adaptation rebound," a critical prediction of the two-state model, occurred when visual feedback was deprived after an adaptation-extinction episode. We then examined the speed of decay of the prism aftereffect (without any visual feedback) after repetitions of 30, 150, and 500 trials of prism exposure. The speed of decay decreased with the number of exposure trials, a phenomenon that was best explained by assuming an "ultraslow" system, in addition to the fast and slow systems. Finally, we compared retention of aftereffects 24 h after 150 or 500 trials of exposure: retention was significantly greater after 500 than 150 trials. This difference in retention could not be explained by the two-state model but was well explained by the three-state model as arising from the difference in the amount of adaptation of the "ultraslow process." These results suggest that there are not only fast and slow systems but also an ultraslow learning system in prism adaptation that is activated by prolonged prism exposure of 150-500 trials. Copyright © 2015 the American Physiological Society.
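The two-state model cited (Smith et al., 2006) and the three-state extension proposed here are linear state-space learners driven by a shared error signal; a minimal sketch, with the retention (A) and learning (B) rates purely illustrative values ordered fast, slow, ultraslow:

```python
import numpy as np

def multirate_adaptation(perturbation, A=(0.60, 0.97, 0.999), B=(0.40, 0.05, 0.005)):
    """Three-state model of motor adaptation: on each trial every state
    updates as x_i <- A_i*x_i + B_i*error, where error is the part of the
    perturbation left uncompensated. A and B values are illustrative."""
    A, B = np.asarray(A), np.asarray(B)
    x = np.zeros(3)
    net = []
    for p in perturbation:
        error = p - x.sum()        # uncompensated displacement on this trial
        x = A * x + B * error
        net.append(x.sum())
    return np.array(net), x

# Prism exposure as a constant unit displacement, 150 vs 500 trials.
net150, states150 = multirate_adaptation(np.ones(150))
net500, states500 = multirate_adaptation(np.ones(500))
```

With longer exposure, more of the total adaptation is carried by the ultraslow state, which retains best after the perturbation is removed; this qualitatively reproduces the greater 24-h retention after 500 than after 150 exposure trials.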
An Adaptation-Induced Repulsion Illusion in Tactile Spatial Perception
Li, Lux; Chan, Arielle; Iqbal, Shah M.; Goldreich, Daniel
2017-01-01
Following focal sensory adaptation, the perceived separation between visual stimuli that straddle the adapted region is often exaggerated. For instance, in the tilt aftereffect illusion, adaptation to tilted lines causes subsequently viewed lines with nearby orientations to be perceptually repelled from the adapted orientation. Repulsion illusions in the nonvisual senses have been less studied. Here, we investigated whether adaptation induces a repulsion illusion in tactile spatial perception. In a two-interval forced-choice task, participants compared the perceived separation between two point-stimuli applied on the forearms successively. Separation distance was constant on one arm (the reference) and varied on the other arm (the comparison). In Experiment 1, we took three consecutive baseline measurements, verifying that in the absence of manipulation, participants’ distance perception was unbiased across arms and stable across experimental blocks. In Experiment 2, we vibrated a region of skin on the reference arm, verifying that this focally reduced tactile sensitivity, as indicated by elevated monofilament detection thresholds. In Experiment 3, we applied vibration between the two reference points in our distance perception protocol and discovered that this caused an illusory increase in the separation between the points. We conclude that focal adaptation induces a repulsion aftereffect illusion in tactile spatial perception. The illusion provides clues as to how the tactile system represents spatial information. The analogous repulsion aftereffects caused by adaptation in different stimulus domains and sensory systems may point to fundamentally similar strategies for dynamic sensory coding. PMID:28701936
ERIC Educational Resources Information Center
Vida, Mark D.; Mondloch, Catherine J.
2009-01-01
This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…
Malinina, E S; Andreeva, I G
2013-01-01
We studied, in the free field, how the perception of withdrawing and approaching sound sources influences auditory aftereffects. Radial movement of the adapting auditory stimuli was simulated in two ways: (1) by oppositely directed, simultaneous amplitude changes of wideband signals at two loudspeakers placed 1.1 and 4.5 m from the listener; (2) by increasing or decreasing the wideband-noise amplitude of impulses at one of the loudspeakers, either the close or the distant one. Radial movement of the test stimuli was simulated with the first method only. Nine listeners judged the direction of test-stimulus movement without adaptation (control) and after adaptation. Adapting stimuli were stationary, slowly moving (sound-level variation of 2 dB), or rapidly moving (variation of 12 dB). The percentage of "withdrawing" responses was used to construct psychometric curves. Three perceptual phenomena were found. A "growing louder" effect appeared in the control series without adaptation: the number of "withdrawing" responses decreased and test stimuli were overestimated as approaching. Position-dependent aftereffects followed adaptation to stationary and slowly moving stimuli: the number of "withdrawing" responses increased, test stimuli were overestimated as withdrawing, and the effect weakened as the distance between listener and loudspeaker increased. Movement aftereffects appeared after adaptation to rapidly moving stimuli and were direction dependent: relative to control, "withdrawing" responses increased after adaptation to approach and decreased after adaptation to withdrawal. These movement aftereffects were more pronounced when adapting-stimulus movement was simulated with the first method, in which the listener could determine the start and end points of the movement trajectory. The movement aftereffects did not interact with the growing-louder effect under any mode of presenting the adapting stimuli. As the distance to the source of the adapting stimuli increased, the approach aftereffect tended to decrease and the withdrawal aftereffect to increase.
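A psychometric curve of the kind described, the percentage of "withdrawing" responses as a function of the test stimulus's sound-level change, is commonly fitted with a logistic function; a minimal sketch in which the data points are entirely hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, mu, sigma):
    """Logistic psychometric function: P('withdrawing') vs stimulus level,
    with point of subjective equality mu and slope parameter sigma."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

# Hypothetical data: sound-level change of the test stimulus (dB) and the
# proportion of 'withdrawing' responses.
levels = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
p_withdraw = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.93, 0.97])
(mu, sigma), _ = curve_fit(psychometric, levels, p_withdraw, p0=(0.0, 3.0))
```

The fitted mu marks the level judged equally often as approaching or withdrawing; an adaptation-induced horizontal shift of mu relative to the control curve quantifies the aftereffect.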
Bunday, Karen L.
2009-01-01
We studied 12 peripheral neuropathy patients (PNP) and 13 age-matched controls with the “broken escalator” paradigm to see how somatosensory loss affects gait adaptation and the release and recovery (“braking”) of the forward trunk overshoot observed during this locomotor aftereffect. Trunk displacement, foot contact signals, and leg electromyograms (EMGs) were recorded while subjects walked onto a stationary sled (BEFORE trials), onto the moving sled (MOVING or adaptation trials), and again onto the stationary sled (AFTER trials). PNP were unsteady during the MOVING trials, but this progressively improved, indicating some adaptation. During the AFTER trials, 77% of control subjects displayed a trunk overshoot aftereffect but over half of the PNP (58%) did not. The PNP without a trunk aftereffect adapted to the MOVING trials by increasing distance traveled; subsequently this was expressed as increased distance traveled during the aftereffect rather than as a trunk overshoot. This clear separation in consequent aftereffects was not seen in the normal controls, suggesting that, as a result of somatosensory loss, some PNP use distinctive strategies to negotiate the moving sled, in turn resulting in distinct aftereffects. In addition, PNP displayed earlier than normal anticipatory leg EMG activity during the first AFTER trial. Although proprioceptive inputs are not critical for the emergence or termination of the aftereffect, somatosensory loss induces profound changes in motor adaptation and anticipation. Our study has found individual differences in adaptive motor performance, indicating that PNP adopt different feed-forward gait compensatory strategies in response to peripheral sensory loss. PMID:19741105
Intramanual and intermanual transfer of the curvature aftereffect
Duijndam, Maarten J. A.; Ketels, Myrna F. M.; Wilbers, Martine T. J. M.; Zwijsen, Sandra A.; Kappers, Astrid M. L.
2008-01-01
The existence and transfer of a haptic curvature aftereffect was investigated to obtain a greater insight into neural representation of shape. The haptic curvature aftereffect is the phenomenon whereby a flat surface is judged concave if the preceding touched stimulus was convex and vice versa. Single fingers were used to touch the subsequently presented stimuli. A substantial aftereffect was found when the adaptation surface and the test surface were touched by the same finger. Furthermore, a partial, but significant transfer of the aftereffect was demonstrated between fingers of the same hand and between fingers of both the hands. These results provide evidence that curvature information is not only represented at a level that is directly connected to the mechanoreceptors of individual fingers but is also represented at a stage in the somatosensory cortex shared by the fingers of both the hands. PMID:18438649
Path integration in tactile perception of shapes.
Moscatelli, Alessandro; Naceri, Abdeldjallil; Ernst, Marc O
2014-11-01
Whenever we move the hand across a surface, tactile signals provide information about the relative velocity between the skin and the surface. If the system were able to integrate the tactile velocity information over time, cutaneous touch could provide an estimate of the relative displacement between the hand and the surface. Here, we asked whether humans are able to form a reliable representation of the motion path from tactile cues only, integrating motion information over time. In order to address this issue, we conducted three experiments using tactile motion and asked participants (1) to estimate the length of a simulated triangle, (2) to reproduce the shape of a simulated triangular path, and (3) to estimate the angle between two line segments. Participants were able to accurately indicate the length of the path, whereas the perceived direction was affected by a direction bias (inward bias). The response pattern was thus qualitatively similar to the ones reported in classical path integration studies involving locomotion. However, we explain the directional biases as the result of a tactile motion aftereffect. Copyright © 2014 Elsevier B.V. All rights reserved.
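Integrating the tactile velocity signal over time, as the opening sentences describe, recovers displacement; a minimal sketch with a hypothetical right-triangle path:

```python
import numpy as np

def integrate_path(velocities, dt):
    """Cumulatively integrate (vx, vy) velocity samples over time to
    recover the displacement path."""
    return np.cumsum(np.asarray(velocities) * dt, axis=0)

# Hypothetical triangular path (values are illustrative): three legs traced
# at constant velocity in cm/s, sampled at 100 Hz.
dt = 0.01
leg1 = np.tile([3.0, 0.0], (100, 1))    # 1 s along x
leg2 = np.tile([0.0, 4.0], (100, 1))    # 1 s along y
leg3 = np.tile([-3.0, -4.0], (100, 1))  # 1 s back toward the start
path = integrate_path(np.vstack([leg1, leg2, leg3]), dt)
```

An unbiased integrator closes the triangle exactly; the inward bias the study reports corresponds to systematic errors in the integrated direction, which the authors attribute to a tactile motion aftereffect rather than to the integration itself.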
Jansen-Osmann, Petra; Richter, Stefanie; Konczak, Jürgen; Kalveram, Karl-Theodor
2002-03-01
When humans perform goal-directed arm movements under the influence of an external damping force, they learn to adapt to these external dynamics. After removal of the external force field, they reveal kinematic aftereffects that are indicative of a neural controller that still compensates for the no-longer-present force. Such behavior suggests that the adult human nervous system uses a neural representation of inverse arm dynamics to control upper-extremity motion. Central to the notion of an inverse dynamic model (IDM) is that learning generalizes. Consequently, aftereffects should be observable even in untrained workspace regions. Adults have shown such behavior, but the ontogenetic development of this process remains unclear. This study examines the adaptive behavior of children and investigates whether learning a force field in one hemifield of the right arm workspace has an effect on force adaptation in the other hemifield. Thirty children (aged 6-10 years) and ten adults performed 30 degrees elbow flexion movements under two conditions of external damping (negative and null). We found that learning to compensate for an external damping force transferred to the opposite hemifield, which indicates that a model of the limb dynamics, rather than an association of visited space and experienced force, was acquired. Aftereffects were more pronounced in the younger children and readaptation to a null-force condition was prolonged. This finding is consistent with the view that IDMs in children are imprecise neural representations of the actual arm dynamics. It indicates that the acquisition of IDMs is a developmental achievement and that the human motor system is inherently flexible enough to adapt to any novel force within the limits of the organism's biomechanics.
Cotton, Seonaidh; Sharp, Linda; Cochran, Claire; Gray, Nicola; Cruickshank, Maggie; Smart, Louise; Thornton, Alison; Little, Julian
2011-06-01
Although it is recognised that some women experience pain or bleeding during a cervical cytology test, few studies have quantified the physical after-effects of these tests. To investigate the frequency, severity, and duration of after-effects in women undergoing follow-up cervical cytology tests, and to identify subgroups with higher frequencies, in Grampian, Tayside, and Nottingham. Cohort study nested within a multi-centre individually randomised controlled trial. The cohort included 1120 women, aged 20-59 years, with low-grade abnormal cervical cytology who completed a baseline sociodemographic questionnaire and had a follow-up cervical cytology test in primary care 6 months later. Six weeks after this test, women completed a postal questionnaire on pain, bleeding, and discharge experienced after the test, including duration and severity. The adjusted prevalence of each after-effect was computed using logistic regression. A total of 884 women (79%) completed the after-effects questionnaire; 30% experienced at least one after-effect: 15% reported pain, 16% bleeding, and 7% discharge. The duration of discharge was ≤2 days for 66%, 3-6 days for 22%, and ≥7 days for 11% of women. Pain or bleeding lasted ≤2 days in more than 80% of women. Severe after-effects were reported by <1% of women. The prevalence of pain decreased with increasing age. Bleeding was more frequent among nulliparous women. Discharge was more common among oral contraceptive users. Pain, bleeding, and discharge are not uncommon in women having follow-up cervical cytology tests. Informing women about possible after-effects could better prepare them and provide reassurance, thereby minimising potential non-adherence with follow-up or non-participation in screening in the future. PMID:21801512
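An "adjusted prevalence computed using logistic regression" is commonly obtained by marginal standardization: fit the model, fix the covariate of interest at one value for everyone, predict, and average the predicted risks. A minimal sketch on simulated data (the variable names, coefficients, sample generation, and the use of scikit-learn are all assumptions for illustration, not details from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated cohort: a binary after-effect modelled on age and parity, with
# an assumed negative parity coefficient (bleeding was reported as more
# frequent among nulliparous women).
n = 884
age = rng.integers(20, 60, n)
parous = rng.integers(0, 2, n)
logit = -1.0 - 0.03 * (age - 40) - 0.8 * parous
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, parous])
model = LogisticRegression(max_iter=1000).fit(X, y)

def adjusted_prevalence(parity_value):
    """Marginal standardization: average predicted risk over the observed
    covariate distribution with parity fixed at one value."""
    Xs = X.copy()
    Xs[:, 1] = parity_value
    return model.predict_proba(Xs)[:, 1].mean()

prev_nulliparous = adjusted_prevalence(0)
prev_parous = adjusted_prevalence(1)
```

Averaging over the cohort's own age distribution is what makes the two prevalences comparable despite any age imbalance between parity groups.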
Neck Proprioception Shapes Body Orientation and Perception of Motion
Pettorossi, Vito Enrico; Schieppati, Marco
2014-01-01
This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead (SSA), and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall early findings on human balance, gait trajectory, and SSA induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus for emphasizing the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, SSA, and walking trajectory. Neck vibration also induces persistent aftereffects on the SSA and on self-motion perception of vestibular origin. These perceptive effects depend on the intensity, duration, and side of the conditioning vibratory stimulation, and on muscle status. They can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of the perception of body orientation and motion. Furthermore, it indicates that tonic neck-proprioceptive input may induce persistent influences on the subject's mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent head positional or motor changes. PMID:25414660
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
The moving platform aftereffect: limited generalization of a locomotor adaptation.
Reynolds, R F; Bronstein, A M
2004-01-01
We have recently described a postural after-effect of walking onto a stationary platform previously experienced as moving, which occurs despite full knowledge that the platform will no longer move. This experiment involves an initial baseline period when the platform is kept stationary (BEFORE condition), followed by a brief adaptation period when subjects learn to walk onto the platform moving at 1.2 m/s (MOVING condition). Subjects are clearly warned that the platform will no longer move and asked to walk onto it again (AFTER condition). Despite the warning, they walk toward the platform with a velocity greater than that observed during the BEFORE condition, and a large forward sway of the trunk is observed once they have landed on the platform. This aftereffect, which disappears within three trials, represents dissociation of knowledge and action. In the current set of experiments, to gain further insight into this phenomenon, we have manipulated three variables, the context, location, and method of the walking task, between the MOVING and AFTER conditions, to determine how far the adaptation will generalize. It was found that when the gait initiation cue was changed from beeps to a flashing light, or vice versa, there was no difference in the magnitude of the aftereffect, either in terms of walking velocity or forward sway of the trunk. Changing the leg with which gait was initiated, however, reduced sway magnitude by approximately 50%. When subjects changed from forward walking to backward walking, the aftereffect was abolished. Similarly, walking in a location other than the mobile platform did not produce any aftereffect. However, in these latter two experiments, the aftereffect reappeared when subjects reverted to the walking pattern used during the MOVING condition. 
Hence, these results show that a change in abstract context had no influence, whereas any deviation from the way and location in which the moving platform task was originally performed profoundly reduced the size of the aftereffect. Although the moving platform aftereffect is an example of inappropriate generalization by the motor system across time, these results show that this generalization is highly limited to the method and location in which the original adaptation took place.
Differential effect of visual motion adaption upon visual cortical excitability.
Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer
2017-03-01
The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas, from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT.
Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.
Mechanisms of Sensorimotor Adaptation to Centrifugation
NASA Technical Reports Server (NTRS)
Paloski, W. H.; Wood, S. J.; Kaufman, G. D.
1999-01-01
We postulate that centripetal acceleration induced by centrifugation can be used as an inflight sensorimotor countermeasure to retain and/or promote appropriate crewmember responses to sustained changes in gravito-inertial force conditions. Active voluntary motion is required to promote vestibular system conditioning, and both visual and graviceptor sensory feedback are critical for evaluating internal representations of spatial orientation. The goal of our investigation is to use centrifugation to develop an analog to the conflicting visual/gravito-inertial force environment experienced during space flight, and to use voluntary head movements during centrifugation to study mechanisms of adaptation to altered gravity environments. We address the following two hypotheses: (1) Discordant canal-otolith feedback during head movements in a hypergravity tilted environment will cause a reorganization of the spatial processing required for multisensory integration and motor control, resulting in decreased postural stability upon return to a normal gravity environment. (2) Adaptation to this "gravito-inertial tilt distortion" will result in a negative after-effect, and readaptation will be expressed by return of postural stability to baseline conditions. During the third year of our grant we concentrated on examining changes in balance control following 90-180 min of centrifugation at 1.4 g. We also began a control study in which we exposed subjects to 90 min of sustained roll tilt in a static (non-rotating) chair. This allowed us to examine adaptation to roll tilt without the hypergravity induced by centrifugation. To these ends, we addressed the question: Is gravity an internal calibration reference for postural control? The remainder of this report is limited to presenting preliminary findings from this study.
Walser, Moritz; Plessow, Franziska; Goschke, Thomas; Fischer, Rico
2014-07-01
Previous studies have shown that completed prospective memory (PM) intentions entail aftereffects: ongoing-task-performance decrements in trials containing stimuli that previously served as PM cues triggering the intended action. Previous research also reported that these PM aftereffects decrease over time, revealing a specific time course. In the present study, we tested two accounts of this pattern, assuming that the decline of aftereffects is related either to the temporal distance from PM task completion or to repeated exposure to the formerly relevant PM cues in the ongoing task. In three experiments, we manipulated both the temporal distance from PM task completion and the frequency of repeated PM cues and demonstrated that aftereffects of completed intentions declined with repeated exposure to formerly relevant PM cues. Moreover, effects of repeated exposure were not limited to the repetition of specific PM-cue exemplars but generalized to other semantically related cues within the PM-cue category. Together, these findings show that the decrease in aftereffects of completed intentions is not related to the temporal duration of the subsequent test block but crucially depends on repeated exposure to the previously relevant PM cues.
Gaze direction affects the magnitude of face identity aftereffects.
Kloth, Nadine; Jeffery, Linda; Rhodes, Gillian
2015-02-20
The face perception system partly owes its efficiency to adaptive mechanisms that constantly recalibrate face coding to our current diet of faces. Moreover, faces that are better attended produce more adaptation. Here, we investigated whether the social cues conveyed by a face can influence the amount of adaptation that face induces. We compared the magnitude of face identity aftereffects induced by adaptors with direct and averted gazes. We reasoned that faces conveying direct gaze may be more engaging and better attended and thus produce larger aftereffects than those with averted gaze. Using an adaptation duration of 5 s, we found that aftereffects for adaptors with direct and averted gazes did not differ (Experiment 1). However, when processing demands were increased by reducing adaptation duration to 1 s, we found that gaze direction did affect the magnitude of the aftereffect, but in an unexpected direction: Aftereffects were larger for adaptors with averted rather than direct gaze (Experiment 2). Eye tracking revealed that differences in looking time to the faces between the two gaze directions could not account for these findings. Subsequent ratings of the stimuli (Experiment 3) showed that adaptors with averted gaze were actually perceived as more expressive and interesting than adaptors with direct gaze. Therefore it appears that the averted-gaze faces were more engaging and better attended, leading to larger aftereffects. Overall, our results suggest that naturally occurring facial signals can modulate the adaptive impact a face exerts on our perceptual system. Specifically, the faces that we perceive as most interesting also appear to calibrate the organization of our perceptual system most strongly. © 2015 ARVO.
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
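The measure used here (the self-motion stimulus value at which subjects are equally likely to report motion in either direction, i.e., the point of subjective equality) can be sketched by fitting a cumulative-Gaussian psychometric function; the response proportions below are invented for illustration and the fit uses a coarse grid search rather than any particular package.

```python
import numpy as np
from math import erf

def pse(stimulus, prop_rightward):
    """Estimate the point of subjective equality (PSE) by fitting a
    cumulative Gaussian to proportion-'rightward' judgements via a
    grid search over mean (the PSE) and slope."""
    def cg(x, mu, sigma):
        return np.array([0.5 * (1 + erf((xi - mu) / (sigma * np.sqrt(2)))) for xi in x])
    best = None
    for mu in np.linspace(stimulus.min(), stimulus.max(), 201):
        for sigma in np.linspace(0.2, 5.0, 25):
            pred = np.clip(cg(stimulus, mu, sigma), 1e-6, 1 - 1e-6)
            # binomial log-likelihood (equal trial counts per level assumed)
            ll = np.sum(prop_rightward * np.log(pred)
                        + (1 - prop_rightward) * np.log(1 - pred))
            if best is None or ll > best[0]:
                best = (ll, mu)
    return best[1]

# Hypothetical velocities (cm/s) and proportion of 'rightward' reports
v = np.array([-4, -2, -1, 0, 1, 2, 4], dtype=float)
baseline = np.array([0.02, 0.15, 0.30, 0.50, 0.70, 0.85, 0.98])
with_visual = np.array([0.01, 0.05, 0.12, 0.25, 0.45, 0.70, 0.95])  # shifted

# A positive shift means the visual stimulus biased self-motion perception
shift = pse(v, with_visual) - pse(v, baseline)
```

A nonzero PSE shift between conditions is the signature the study tests for: present for true visual motion, absent for the illusory-motion adaptors.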
Wu, Ming; Hsu, Chao-Jung; Kim, Janis
2018-01-01
The goal of this study was to determine how individuals post-stroke respond to a lateral assistance force applied to the pelvis during treadmill walking. Ten individuals with chronic (> 6 months) stroke were recruited to participate in this study. A controlled assistance force (~10% of body weight) was applied to the pelvis in the lateral direction, toward the paretic side, during stance of the paretic leg. Kinematics of the pelvis and legs were recorded. Applying the pelvic assistance force facilitated weight shifting toward the paretic side, resulting in a more symmetrical gait pattern but also inducing an enlarged range of motion of the pelvis during the early adaptation period. The neural system of individuals post-stroke adapted to the pelvic assistance force and showed an aftereffect consisting of a reduced range of motion of the pelvis following load release during the post-adaptation period. PMID:28813835
The role of human ventral visual cortex in motion perception
Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene
2013-01-01
Visual motion perception is fundamental to many aspects of vision. It has long been associated with the dorsal (parietal) pathway, and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030
Verticality perception during and after galvanic vestibular stimulation.
Volkening, Katharina; Bergmann, Jeannine; Keller, Ingo; Wuehr, Max; Müller, Friedemann; Jahn, Klaus
2014-10-03
The human brain constructs verticality perception by integrating vestibular, somatosensory, and visual information. Here we investigated whether galvanic vestibular stimulation (GVS) has an effect on verticality perception both during and after application, by assessing the subjective verticals (visual, haptic and postural) in healthy subjects at those times. During stimulation the subjective visual vertical and the subjective haptic vertical shifted towards the anode, whereas this shift was reversed towards the cathode in all modalities once stimulation was turned off. Overall, the effects were strongest for the haptic modality. Additional investigation of the time course of GVS-induced changes in the haptic vertical revealed that anodal shifts persisted for the entire 20-min stimulation interval in the majority of subjects. Aftereffects exhibited different types of decay, with a preponderance for an exponential decay. The existence of such reverse effects after stimulation could have implications for GVS-based therapy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
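The exponential decay reported for these aftereffects can be quantified with a log-linear fit, since for y = a * exp(-t / tau), log(y) is linear in t with slope -1/tau; the time points and shift magnitudes below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical aftereffect time course: haptic-vertical shift (deg) measured
# at fixed intervals after galvanic vestibular stimulation ends
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # minutes post-stimulation
shift = np.array([3.0, 2.2, 1.7, 1.25, 0.95, 0.7])   # illustrative values

# Least-squares line through log(shift); polyfit returns (slope, intercept)
slope, log_a = np.polyfit(t, np.log(shift), 1)
tau = -1.0 / slope        # decay time constant (minutes)
```

Comparing fitted time constants across subjects is one way to characterise the "different types of decay" the abstract mentions; a poor log-linear fit would instead suggest a non-exponential decay profile.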
Ji, Gong-Jun; Yu, Fengqiong; Liao, Wei; Wang, Kai
2017-04-01
The supplementary motor area (SMA) is a key node of the motor network. Inhibitory repetitive transcranial magnetic stimulation (rTMS) of the SMA can potentially improve movement disorders. However, the aftereffects of inhibitory rTMS on brain function remain largely unknown. Using a single-blind, crossover within-subject design, we investigated the role of aftereffects with two inhibitory rTMS protocols [1800 pulses of either 1-Hz repetitive stimulation or continuous theta burst stimulation (cTBS)] on the left SMA. A total of 19 healthy volunteers participated in the rTMS sessions on 2 separate days. Firstly, short-term aftereffects were estimated at three levels (functional connectivity, local activity, and network properties) by comparing the resting-state functional magnetic resonance imaging datasets (9 min) acquired before and after each rTMS session. Local activity and network properties were not significantly altered by either protocol. Functional connectivity within the SMA network was increased (in the left paracentral gyrus) by 1-Hz stimulation and decreased (in the left inferior frontal gyrus and SMA/middle cingulate cortex) by cTBS. The subsequent three-way analysis of variance (site×time×protocol) did not show a significant interaction effect or "protocol" main effect, suggesting that the two protocols share an underlying mechanism. Secondly, sliding-window analysis was used to evaluate the dynamic features of aftereffects in the ~29 min after the end of stimulation. Aftereffects were maintained for a maximum of 9.8 and 6.6 min after the 1-Hz and cTBS protocols, respectively. In summary, this study revealed topographical and temporal aftereffects in the SMA network following inhibitory rTMS protocols, providing valuable information for their application in future neuroscience and clinical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
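The sliding-window analysis of dynamic functional connectivity used here can be sketched as a windowed Pearson correlation between two region time series; the synthetic signals below, including the point where their coupling vanishes, are illustrative assumptions rather than the study's data or pipeline.

```python
import numpy as np

def sliding_window_correlation(x, y, window, step=1):
    """Dynamic functional-connectivity estimate: Pearson correlation
    between two ROI time series within a moving window."""
    n = min(len(x), len(y))
    return np.array([
        np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
        for i in range(0, n - window + 1, step)
    ])

# Two synthetic "ROI" signals that are coupled early and decouple later,
# mimicking an rTMS aftereffect that fades over the scan
rng = np.random.default_rng(1)
t = np.arange(600)
shared = rng.normal(size=600)
coupling = np.where(t < 300, 0.9, 0.0)          # coupling vanishes mid-scan
roi_a = shared + 0.3 * rng.normal(size=600)
roi_b = coupling * shared + rng.normal(size=600)

fc = sliding_window_correlation(roi_a, roi_b, window=60, step=10)
```

The time at which the windowed correlation returns to its baseline level is the kind of quantity behind the reported 9.8 min and 6.6 min aftereffect durations.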
Watching the brain recalibrate: Neural correlates of renormalization during face adaptation.
Kloth, Nadine; Rhodes, Gillian; Schweinberger, Stefan R
2017-07-15
The face perception system flexibly adjusts its neural responses to current face exposure, inducing aftereffects in the perception of subsequent faces. For instance, adaptation to expanded faces makes undistorted faces appear compressed, and adaptation to compressed faces makes undistorted faces appear expanded. Such distortion aftereffects have been proposed to result from renormalization, in which the visual system constantly updates a prototype according to the adaptors' characteristics and evaluates subsequent faces relative to that. However, although consequences of adaptation are easily observed in behavioral aftereffects, it has proven difficult to observe renormalization during adaptation itself. Here we directly measured brain responses during adaptation to establish a neural correlate of renormalization. Given that the face-evoked occipito-temporal P2 event-related brain potential has been found to increase with face prototypicality, we reasoned that the adaptor-elicited P2 could serve as an electrophysiological indicator for renormalization. Participants adapted to sequences of four distorted (compressed or expanded) or undistorted faces, followed by a slightly distorted test face, which they had to classify as undistorted or distorted. We analysed ERPs evoked by each of the adaptors and found that P2 (but not N170) amplitudes evoked by consecutive adaptor faces exhibited an electrophysiological pattern of renormalization during adaptation to distorted faces: P2 amplitudes evoked by both compressed and expanded adaptors significantly increased towards asymptotic levels as adaptation proceeded. P2 amplitudes were smallest for the first adaptor, significantly larger for the second, and yet larger for the third adaptor. We conclude that the sensitivity of the occipito-temporal P2 to the perceived deviation of a face from the norm makes this component an excellent tool to study adaptation-induced renormalization. Copyright © 2017 Elsevier Inc. All rights reserved.
Multiple Motor Learning Strategies in Visuomotor Rotation
Saijo, Naoki; Gomi, Hiroaki
2010-01-01
Background When exposed to a continuous directional discrepancy between movements of a visible hand cursor and the actual hand (visuomotor rotation), subjects adapt their reaching movements so that the cursor is brought to the target. Abrupt removal of the discrepancy after training induces reaching error in the direction opposite to the original discrepancy, which is called an aftereffect. Previous studies have shown that training with gradually increasing visuomotor rotation results in a larger aftereffect than with a suddenly increasing one. Although the aftereffect difference implies a difference in the learning process, it is still unclear whether the learned visuomotor transformations are qualitatively different between the training conditions. Methodology/Principal Findings We examined the qualitative changes in the visuomotor transformation after the learning of the sudden and gradual visuomotor rotations. The learning of the sudden rotation led to a significant increase of the reaction time for arm movement initiation and then the reaching error decreased, indicating that the learning is associated with an increase of computational load in motor preparation (planning). In contrast, the learning of the gradual rotation did not change the reaction time but resulted in an increase of the gain of feedback control, suggesting that the online adjustment of the reaching contributes to the learning of the gradual rotation. When the online cursor feedback was eliminated during the learning of the gradual rotation, the reaction time increased, indicating that additional computations are involved in the learning of the gradual rotation. Conclusions/Significance The results suggest that the change in the motor planning and online feedback adjustment of the movement are involved in the learning of the visuomotor rotation. The contributions of those computations to the learning are flexibly modulated according to the visual environment. 
Such multiple learning strategies would be required for reaching adaptation within a short training period. PMID:20195373
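The sudden and gradual rotation schedules can be contrasted in a minimal single-rate error-driven model (parameter values are illustrative assumptions). Notably, this simple sketch leaves a comparable residual estimate, and hence aftereffect, for both schedules; the empirically larger gradual aftereffect is part of what motivates the richer account (planning versus online feedback adjustment) argued for in the abstract.

```python
import numpy as np

def adapt(rotation_deg, retention=0.95, learning_rate=0.1):
    """Single-rate model: the internal rotation estimate is updated
    from the residual cursor error on every reach."""
    est = 0.0
    trace = []
    for r in rotation_deg:
        est = retention * est + learning_rate * (r - est)
        trace.append(est)
    return np.array(trace)

trials = 100
sudden = np.full(trials, 30.0)               # full 30-degree rotation at once
gradual = np.linspace(0.0, 30.0, trials)     # rotation ramps up over training

# The estimate remaining on the last trial produces the reaching error
# (aftereffect) when the discrepancy is abruptly removed
aftereffect_sudden = adapt(sudden)[-1]
aftereffect_gradual = adapt(gradual)[-1]
```

Both schedules leave a substantial residual estimate in this sketch, so reproducing a schedule-dependent aftereffect difference requires additional machinery (e.g., multiple learning processes or explicit strategy).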
Avanzino, Laura; Ravaschio, Andrea; Lagravinese, Giovanna; Bonassi, Gaia; Abbruzzese, Giovanni; Pelosin, Elisa
2018-01-01
It is under debate whether the cerebellum plays a role in dystonia pathophysiology and in the expression of clinical phenotypes. We investigated a typical cerebellar function (anticipatory movement control) in patients with cervical dystonia (CD) with and without tremor. Twenty patients with CD, with and without tremor, and 17 healthy controls were required to catch balls of different load: 15 trials with a light ball, 25 trials with a heavy ball (adaptation) and 15 trials with a light ball (post-adaptation). Arm movements were recorded using a motion capture system. We evaluated: (i) the anticipatory adjustment (just before the impact); (ii) the extent and rate of the adaptation (at the impact) and (iii) the aftereffect in the post-adaptation phase. The anticipatory adjustment was reduced during adaptation in CD patients with tremor compared with CD patients without tremor and controls. The extent and rate of adaptation and the aftereffect in the post-adaptation phase were smaller in CD with tremor than in controls and CD without tremor. Patients with cervical dystonia and tremor display abnormal predictive movement control. Our findings point to a possible role of the cerebellum in the expression of a clinical phenotype in dystonia. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
You are only as old as you sound: auditory aftereffects in vocal age perception.
Zäske, Romi; Schweinberger, Stefan R
2011-12-01
High-level adaptation not only biases the perception of faces, but also causes transient distortions in auditory perception of non-linguistic voice information about gender, identity, and emotional intonation. Here we report a novel auditory aftereffect in perceiving vocal age: age estimates were elevated in age-morphed test voices when preceded by adaptor voices of young speakers (∼20 yrs), compared to old adaptor voices (∼70 yrs). This vocal age aftereffect (VAAE) complements a recently reported face aftereffect (Schweinberger et al., 2010) and points to selective neuronal coding of vocal age. Intriguingly, post-adaptation assessment revealed that VAAEs could persist for minutes after adaptation, although reduced in magnitude. As an important qualification, VAAEs during post-adaptation were modulated by gender congruency between speaker and listener. For both male and female listeners, VAAEs were much reduced for test voices of opposite gender. Overall, this study establishes a new auditory aftereffect in the perception of vocal age. We offer a tentative sociobiological explanation for the differential, gender-dependent recovery from vocal age adaptation. Copyright © 2011 Elsevier B.V. All rights reserved.
Adaptation effects to attractiveness of face photographs and art portraits are domain-specific
Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph
2013-01-01
We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690
Diels, Cyriel; Bos, Jelte E
2016-03-01
This paper discusses the predicted increase in the occurrence and severity of motion sickness in self-driving cars. Self-driving cars have the potential to lead to significant benefits. From the driver's perspective, the direct benefits of this technology are considered to be increased comfort and productivity. However, we show here that the envisaged scenarios all lead to an increased risk of motion sickness. As such, the benefits this technology is assumed to bring may not be capitalised on, in particular by those already susceptible to motion sickness. This can negatively affect user acceptance and uptake and, in turn, limit the potential socioeconomic benefits that this emerging technology may provide. Following a discussion on the causes of motion sickness in the context of self-driving cars, we present guidelines to steer the design and development of automated vehicle technologies. The aim is to limit or avoid the impact of motion sickness and ultimately promote the uptake of self-driving cars. Attention is also given to less well known consequences of motion sickness, in particular negative aftereffects such as postural instability, and detrimental effects on task performance and how this may impact the use and design of self-driving cars. We conclude that basic perceptual mechanisms need to be considered in the design process whereby self-driving cars cannot simply be thought of as living rooms, offices, or entertainment venues on wheels. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Tilt aftereffect following adaptation to translational Glass patterns
Pavan, Andrea; Hocketstaller, Johanna; Contillo, Adriano; Greenlee, Mark W.
2016-01-01
Glass patterns (GPs) consist of randomly distributed dot pairs (dipoles) whose orientations are determined by specific geometric transforms. We assessed whether adaptation to stationary oriented translational GPs suppresses the activity of orientation selective detectors producing a tilt aftereffect (TAE). The results showed that adaptation to GPs produces a TAE similar to that reported in previous studies, though reduced in amplitude. This suggests the involvement of orientation selective mechanisms. We also measured the interocular transfer (IOT) of the GP-induced TAE and found an almost complete IOT, indicating the involvement of orientation selective and binocularly driven units. In additional experiments, we assessed the role of attention in TAE from GPs. The results showed that distraction during adaptation similarly modulates the TAE after adapting to both GPs and gratings. Moreover, in the case of GPs, distraction is likely to interfere with the adaptation process rather than with the spatial summation of local dipoles. We conclude that TAE from GPs possibly relies on visual processing levels in which the global orientation of GPs has been encoded by neurons that are mostly binocularly driven, orientation selective and whose adaptation-related neural activity is strongly modulated by attention. PMID:27005949
Four year-olds use norm-based coding for face identity.
Jeffery, Linda; Read, Ainsley; Rhodes, Gillian
2013-05-01
Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged children also use norm-based coding. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4 year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4 year-old children's poor face identification skills. Copyright © 2013 Elsevier B.V. All rights reserved.
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays a potential role. PMID:25873869
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and neural signals in auditory, visual, and, crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias the perception of ambiguous visual motion. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Aftereffects of Lithium-Conditioned Stimuli on Consummatory Behavior
ERIC Educational Resources Information Center
Domjan, Michael; Gillan, Douglas J.
1977-01-01
To complement investigations of the direct effects of lithium toxicosis on consummatory behavior, these experiments were designed to determine the aftereffects on drinking of exposure to a conditioned stimulus previously paired with lithium. (Author/RK)
Alekseichuk, Ivan; Diers, Kersten; Paulus, Walter; Antal, Andrea
2016-10-15
The aim of this study was to investigate if the blood oxygenation level-dependent (BOLD) changes in the visual cortex can be used as biomarkers reflecting the online and offline effects of transcranial electrical stimulation (tES). Anodal transcranial direct current stimulation (tDCS) and 10Hz transcranial alternating current stimulation (tACS) were applied for 10min duration over the occipital cortex of healthy adults during the presentation of different visual stimuli, using a crossover, double-blinded design. Control experiments were also performed, in which sham stimulation as well as another electrode montage were used. Anodal tDCS over the visual cortex induced a small but significant further increase in BOLD response evoked by a visual stimulus; however, no aftereffect was observed. Ten hertz of tACS did not result in an online effect, but in a widespread offline BOLD decrease over the occipital, temporal, and frontal areas. These findings demonstrate that tES during visual perception affects the neuronal metabolism, which can be detected with functional magnetic resonance imaging (fMRI). Copyright © 2016 Elsevier Inc. All rights reserved.
Trait approach motivation moderates the aftereffects of self-control
Crowell, Adrienne; Kelley, Nicholas J.; Schmeichel, Brandon J.
2014-01-01
Numerous experiments have found that exercising self-control reduces success on subsequent, seemingly unrelated self-control tasks. Such evidence lends support to a strength model that posits a limited and depletable resource underlying all manner of self-control. Recent theory and evidence suggest that exercising self-control may also increase approach-motivated impulse strength. The two studies reported here tested two implications of this increased approach motivation hypothesis. First, aftereffects of self-control should be evident even in responses that require little or no self-control. Second, participants higher in trait approach motivation should be particularly susceptible to such aftereffects. In support, exercising self-control led to increased optimism (Study 1) and broadened attention (Study 2), but only among individuals higher in trait approach motivation. These findings suggest that approach motivation is an important key to understanding the aftereffects of exercising self-control. PMID:25324814
People can understand descriptions of motion without activating visual motion brain regions
Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina
2013-01-01
What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Veniero, Domenica; Vossen, Alexandra; Gross, Joachim; Thut, Gregor
2015-01-01
A number of rhythmic protocols have emerged for non-invasive brain stimulation (NIBS) in humans, including transcranial alternating current stimulation (tACS), oscillatory transcranial direct current stimulation (otDCS), and repetitive (also called rhythmic) transcranial magnetic stimulation (rTMS). With these techniques, it is possible to match the frequency of the externally applied electromagnetic fields to the intrinsic frequency of oscillatory neural population activity (“frequency-tuning”). Mounting evidence suggests that by this means tACS, otDCS, and rTMS can entrain brain oscillations and promote associated functions in a frequency-specific manner, in particular during (i.e., online to) stimulation. Here, we focus instead on the changes in oscillatory brain activity that persist after the end of stimulation. Understanding such aftereffects in healthy participants is an important step for developing these techniques into potentially useful clinical tools for the treatment of specific patient groups. Reviewing the electrophysiological evidence in healthy participants, we find aftereffects on brain oscillations to be a common outcome following tACS/otDCS and rTMS. However, we did not find a consistent, predictable pattern of aftereffects across studies, which is in contrast to the relative homogeneity of reported online effects. This indicates that aftereffects are partially dissociated from online, frequency-specific (entrainment) effects during tACS/otDCS and rTMS. We outline possible accounts and future directions for a better understanding of the link between online entrainment and offline aftereffects, which will be key for developing more targeted interventions into oscillatory brain activity. PMID:26696834
Keshner, E A; Dhaher, Y
2008-07-01
Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field could modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.
Effects of Crowding and Attention on High-Levels of Motion Processing and Motion Adaptation
Pavan, Andrea; Greenlee, Mark W.
2015-01-01
The motion after-effect (MAE) persists in crowding conditions, i.e., when the adaptation direction cannot be reliably perceived. The MAE originating from complex moving patterns spreads into non-adapted sectors of a multi-sector adapting display (i.e., phantom MAE). In the present study we used global rotating patterns to measure the strength of the conventional and phantom MAEs in crowded and non-crowded conditions, and when attention was directed to the adapting stimulus and when it was diverted away from the adapting stimulus. The results show that: (i) the phantom MAE is weaker than the conventional MAE, for both non-crowded and crowded conditions, and when attention was focused on the adapting stimulus and when it was diverted from it, (ii) conventional and phantom MAEs in the crowded condition are weaker than in the non-crowded condition. Analysis conducted to assess the effect of crowding on high-level of motion adaptation suggests that crowding is likely to affect the awareness of the adapting stimulus rather than degrading its sensory representation, (iii) for high-level of motion processing the attentional manipulation does not affect the strength of either conventional or phantom MAEs, neither in the non-crowded nor in the crowded conditions. These results suggest that high-level MAEs do not depend on attention and that at high-level of motion adaptation the effects of crowding are not modulated by attention. PMID:25615577
Duration as a measure of the spiral aftereffect.
DOT National Transportation Integrated Search
1963-01-01
Describes an experiment that studied the reliability of duration as a measure of the spiral aftereffect. The results for 10 Ss indicate that duration is a highly reliable measure and that duration is a simple monotonic function of exposure-time.
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
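The eccentricity weighting described above (strongest contribution from a narrow band near one-third of the stimulus-aperture radius) can be illustrated with a toy pooling computation. This is a hedged sketch, not the authors' fitted space-time filter: the ring location follows the reported one-third-radius finding, while the Gaussian weight profile and its width (`sigma_frac`) are assumptions for illustration.

```python
import numpy as np

def pooled_motion_estimate(positions, local_motions, aperture_radius,
                           peak_frac=1/3, sigma_frac=0.15):
    """Weighted pooling of local motion vectors.

    positions: (N, 2) locations of local motion samples (aperture centre
    at the origin); local_motions: (N, 2) motion vectors at those points.
    Weights peak on a ring at peak_frac * aperture_radius (illustrative
    Gaussian profile; only the ~1/3-radius peak comes from the article).
    Returns the pooled (weight-normalised) global motion estimate.
    """
    r = np.linalg.norm(positions, axis=1)          # eccentricity of each sample
    w = np.exp(-0.5 * ((r - peak_frac * aperture_radius)
                       / (sigma_frac * aperture_radius)) ** 2)
    return (w[:, None] * local_motions).sum(axis=0) / w.sum()

# Uniform rightward motion sampled at random locations inside the aperture:
rng = np.random.default_rng(0)
positions = rng.uniform(-1, 1, size=(200, 2))
local_motions = np.tile([1.0, 0.0], (200, 1))
est = pooled_motion_estimate(positions, local_motions, aperture_radius=1.0)
print(est)  # pooled estimate equals the common vector [1, 0]
```

Because the weights are normalised, uniform motion is recovered exactly; the weighting only matters when local motion signals disagree across eccentricities, in which case samples near one-third of the radius dominate the estimate.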
Arfeller, Carola; Schwarzbach, Jens; Ubaldi, Silvia; Ferrari, Paolo; Barchiesi, Guido; Cattaneo, Luigi
2013-04-01
The posterior superior temporal sulcus (pSTS) is active when observing biological motion. We investigated the functional connections of the pSTS node within the action observation network by measuring the after-effect of focal repetitive transcranial magnetic stimulation (rTMS) with whole-brain functional magnetic resonance imaging (fMRI). Participants received 1-Hz rTMS over the pSTS region for 10 min and underwent fMRI immediately after. While scanned, they were shown short video clips of a hand grasping an object (grasp clips) or moving next to it (control clips). rTMS-fMRI was repeated for four consecutive blocks. In two blocks we stimulated the left pSTS region and in the other two the right pSTS region. For each side, TMS was applied with an effective intensity (95 % of motor threshold) or with ineffective intensity (50 % of motor threshold). Brain regions showing interactive effects of (clip type) × (TMS intensity) were identified in the lateral temporo-occipital cortex, in the anterior intraparietal region and in the ventral premotor cortex. Remote effects of rTMS were mostly limited to the stimulated hemisphere and consisted of an increase of blood oxygen level-dependent responses to grasp clips compared to control clips. We show that the pSTS occupies a pivotal relay position during observation of goal-directed actions.
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle, and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
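The correlation-type EMD at the heart of the model above can be sketched in a few lines. The following is an illustrative Python sketch, not the authors' implementation: a Reichardt-style detector delays each of two neighbouring inputs with a first-order low-pass filter (time constant `tau`, here the 0.1 s reported above), correlates each delayed signal with the undelayed neighbour, and subtracts the two mirror-symmetric half-detectors. The sinusoid test stimulus and parameter names are assumptions for illustration.

```python
import numpy as np

def reichardt_emd(signal_a, signal_b, dt=0.01, tau=0.1):
    """Correlation-type elementary motion detector (Reichardt model).

    signal_a, signal_b: luminance time series from two neighbouring
    sample points; dt: time step (s); tau: low-pass time constant (s).
    Returns the opponent output: positive values signal motion from
    A toward B, negative values the reverse.
    """
    alpha = dt / (tau + dt)                    # first-order low-pass coefficient
    lp_a = np.zeros_like(signal_a)
    lp_b = np.zeros_like(signal_b)
    for t in range(1, len(signal_a)):          # delay each input via the low-pass
        lp_a[t] = lp_a[t - 1] + alpha * (signal_a[t] - lp_a[t - 1])
        lp_b[t] = lp_b[t - 1] + alpha * (signal_b[t] - lp_b[t - 1])
    # Opponent stage: delayed A x undelayed B, minus the mirror half-detector.
    return lp_a * signal_b - lp_b * signal_a

# A 1-Hz sinusoid drifting rightward at 2 deg/s, sampled 0.3 deg apart:
dt, spacing, speed = 0.01, 0.3, 2.0            # s, deg, deg/s
t = np.arange(0, 2, dt)
phase_lag = spacing / speed                    # travel time from A to B
a = np.sin(2 * np.pi * 1.0 * t)
b = np.sin(2 * np.pi * 1.0 * (t - phase_lag))
out = reichardt_emd(a, b, dt=dt, tau=0.1)
print(np.mean(out[50:]) > 0)                   # rightward motion -> positive mean
```

Swapping the two inputs reverses the sign of the output, which is the directional opponency that lets a grid of such detectors distinguish coherent prey- or predator-like translation from the oscillatory motion of windblown vegetation.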
Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion
Niehorster, Diederick C.
2017-01-01
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
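The flow-parsing gain measured by retinal-motion nulling can be expressed compactly. This is a minimal Python sketch under assumed variable names and example numbers, not the authors' analysis code: perceived scene-relative motion is the object's retinal motion minus a fraction (the gain) of the optic-flow component at its location, so the nulling point at which the object appears scene-stationary directly recovers the gain.

```python
def perceived_object_motion(retinal_motion, flow_component, gain):
    """Flow-parsing account of perceived scene-relative object motion:
    subtract a fraction (gain) of the local optic-flow component
    from the object's retinal motion (all speeds in deg/s)."""
    return retinal_motion - gain * flow_component

def flow_parsing_gain(nulling_retinal_motion, flow_component):
    """Gain recovered by retinal-motion nulling: the retinal motion at
    which the object appears stationary in the scene, divided by the
    flow component that self-motion adds at the object's location."""
    return nulling_retinal_motion / flow_component

# Illustrative (assumed) numbers: self-motion adds 5 deg/s of flow at the
# object's location, and the object looks scene-stationary when given
# 4 deg/s of retinal motion in the flow direction.
gain = flow_parsing_gain(4.0, 5.0)
print(gain)  # -> 0.8, below unity as the article reports
assert perceived_object_motion(4.0, 5.0, gain) == 0.0
```

A gain below 1 means only part of the self-motion component is subtracted, so a truly scene-stationary object would be perceived as moving slightly against the flow, which is the inaccuracy the article quantifies.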
Postural time-to-contact as a precursor of visually induced motion sickness.
Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A
2018-06-01
The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
Evidence for auditory-visual processing specific to biological motion.
Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F
2012-01-01
Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal, since the walker's footsteps will grow louder. We investigated whether the multisensory processing of biological motion is subject to different constraints than that of ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modalities is specific to biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modalities yield longer reaction times than consistent motion directions. Here we found an effect specific to biological motion: motion incongruency led to longer reaction times only when the visual walker was intact and recognisable as a human figure. If the figure of the walker was abolished by scrambling, motion incongruency had no effect on the speed of the observers' judgments. In conjunction with Experiment 1, this suggests that conflicting auditory-visual motion information about an intact human walker leads to interference and thereby delays the response.
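The "statistical facilitation" account invoked in Experiment 1 is the classic race model: on each bimodal trial the response is triggered by whichever modality finishes first, so mean reaction time drops even without true integration. A Monte Carlo sketch, with invented RT distributions:

```python
# Illustrative race-model (statistical facilitation) simulation: the
# bimodal RT is the minimum of two independent unimodal finishing
# times. Distribution parameters are invented, not the study's data.
import random, statistics

random.seed(1)
N = 100_000
audio = [random.gauss(450, 60) for _ in range(N)]     # unimodal RTs (ms)
visual = [random.gauss(430, 60) for _ in range(N)]
bimodal = [min(a, v) for a, v in zip(audio, visual)]  # first finisher wins

print(statistics.fmean(audio), statistics.fmean(visual),
      statistics.fmean(bimodal))
# the bimodal mean falls below both unimodal means with no integration
```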
Sharp, Linda; Cotton, Seonaidh; Cochran, Claire; Gray, Nicola; Little, Julian; Neal, Keith; Cruickshank, Maggie
2009-10-01
Few studies have investigated physical after-effects of colposcopy. We compared post-colposcopy self-reported pain, bleeding, discharge and menstrual changes in women who underwent: colposcopic examination only; cervical punch biopsies; and large loop excision of the transformation zone (LLETZ). Observational study nested within a randomised controlled trial. Grampian, Tayside and Nottingham. Nine hundred-and-twenty-nine women, aged 20-59, with low-grade cytology, who had completed their initial colposcopic management. Women completed questionnaires on after-effects at approximately 6-weeks, and on menstruation at 4-months, post-colposcopy. Frequency of pain, bleeding, discharge; changes to first menstrual period post-colposcopy. Seven hundred-and-fifty-one women (80%) completed the 6-week questionnaire. Of women who had only a colposcopic examination, 14-18% reported pain, bleeding or discharge. Around half of women who had biopsies only and two-thirds treated by LLETZ reported pain or discharge (biopsies: 53% pain, 46% discharge; LLETZ: 67% pain, 63% discharge). The frequency of bleeding was similar in the biopsy (79%) and LLETZ groups (87%). Women treated by LLETZ reported bleeding and discharge of significantly longer duration than other women. The duration of pain was similar across management groups. Forty-three percent of women managed by biopsies and 71% managed by LLETZ reported some change to their first period post-colposcopy, as did 29% who only had a colposcopic examination. Cervical punch biopsies and, especially, LLETZ carry a substantial risk of after-effects. After-effects are also reported by women managed solely by colposcopic examination. Ensuring that women are fully informed about after-effects may help to alleviate anxiety and provide reassurance, thereby minimising the harms of screening.
Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2012-01-01
Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…
Response normalization and blur adaptation: Data and multi-scale model
Elliott, Sarah L.; Georgeson, Mark A.; Webster, Michael A.
2011-01-01
Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log–log) slopes from −2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels. PMID:21307174
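Stimuli of the kind described, with a prescribed log-log amplitude-spectrum slope, can be synthesized by reshaping white noise in the frequency domain. A 1-D sketch under our own assumptions (the study used 2-D checkerboards and noise):

```python
# Sketch: synthesize 1-D noise whose amplitude spectrum falls as
# f**alpha on log-log axes, as in the blurred (alpha = -2), focused
# (alpha = -1, i.e. 1/f), and sharpened (alpha = 0) images above.
import numpy as np

def sloped_noise(n, alpha, rng):
    """White noise reshaped so its amplitude spectrum goes as f**alpha."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    scale = np.zeros_like(f)
    scale[1:] = f[1:] ** alpha        # leave the DC term at zero
    return np.fft.irfft(spectrum * scale, n)

def measured_slope(x):
    """Fitted log-log slope of the amplitude spectrum (DC excluded)."""
    amp = np.abs(np.fft.rfft(x))[1:]
    f = np.fft.rfftfreq(len(x))[1:]
    return np.polyfit(np.log(f), np.log(amp), 1)[0]

rng = np.random.default_rng(0)
x = sloped_noise(4096, -1.0, rng)     # roughly "focused" 1/f noise
print(round(measured_slope(x), 2))    # close to -1.0
```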
Perceptual response to visual noise and display media
NASA Technical Reports Server (NTRS)
Durgin, Frank H.; Proffitt, Dennis R.
1993-01-01
The present project was designed to follow up an earlier investigation in which we studied perceptual adaptation in response to the use of Night Vision Goggles, or image-intensification (I²) systems, such as those employed in the military. Our chief concern in the earlier studies was with the dynamic visual noise that is a byproduct of I² technology: under low-light conditions, there is a great deal of 'snow' or sporadic 'twinkling' of pixels in the I² display, which becomes more salient as ambient light levels decrease. Because prolonged exposure to static visual noise produces strong adaptation responses, we reasoned that the dynamic visual noise of I² displays might have a similar effect, which could have implications for their long-term use. However, in the series of experiments reported last year, no evidence at all of such aftereffects following extended exposure to I² displays was found. This finding surprised us, and led us to propose the following studies: (1) an investigation of dynamic visual noise and its capacity to produce aftereffects; and (2) an investigation of the perceptual consequences of characteristics of the display media.
Houck, M R; Hoffman, J E
1986-05-01
According to feature-integration theory (Treisman & Gelade, 1980), separable features such as color and shape exist in separate maps in preattentive vision and can be integrated only through the use of spatial attention. Many perceptual aftereffects, however, which are also assumed to reflect the features available in preattentive vision, are sensitive to conjunctions of features. One possible resolution of these views holds that adaptation to conjunctions depends on spatial attention. We tested this proposition by presenting observers with gratings varying in color and orientation. The resulting McCollough aftereffects were independent of whether the adaptation stimuli were presented inside or outside of the focus of spatial attention. Therefore, color and shape appear to be conjoined preattentively, when perceptual aftereffects are used as the measure. These same stimuli, however, appeared to be separable in two additional experiments that required observers to search for gratings of a specified color and orientation. These results show that different experimental procedures may be tapping into different stages of preattentive vision.
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search
Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.
2012-01-01
Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766
Stochastic growth logistic model with aftereffect for batch fermentation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md
2014-06-19
In this paper, a stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme to solve the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data for microbial growth and solvent production in the batch system. Low values of root mean-square error (RMSE) for the stochastic models with aftereffect indicate good fits.
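The Milstein scheme named in this abstract can be sketched for a plain stochastic logistic SDE, dX = rX(1 − X/K)dt + σX dW. The parameters below are illustrative (not the paper's fitted values) and the aftereffect (delay) term is omitted for brevity; for b(x) = σx the Milstein correction is ½σ²X(ΔW² − Δt).

```python
# Minimal Milstein-scheme sketch for dX = r X (1 - X/K) dt + sigma X dW.
# Illustrative parameters only; the paper's delay term is omitted.
import math, random

def milstein_logistic(x0, r, K, sigma, dt, n_steps, seed=0):
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        drift = r * x * (1 - x / K)
        diff = sigma * x
        # Milstein correction 0.5*b*b'*(dW^2 - dt), with b = sigma*x
        x = x + drift * dt + diff * dW + 0.5 * sigma**2 * x * (dW**2 - dt)
        path.append(x)
    return path

path = milstein_logistic(x0=0.1, r=1.2, K=10.0, sigma=0.1, dt=0.01,
                         n_steps=1000)
print(path[-1])  # settles near the carrying capacity K for small noise
```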
Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M
2017-02-19
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in this suprasensory area are important in determining which of the two antagonistically perceived visual motion phenomena an individual tends to see. This article is part of the themed issue 'Auditory and visual scene analysis'.
Causal evidence for retina dependent and independent visual motion computations in mouse cortex
Hillier, Daniel; Fiscella, Michele; Drinnenberg, Antonia; Trenholm, Stuart; Rompani, Santiago B.; Raics, Zoltan; Katona, Gergely; Juettner, Josephine; Hierlemann, Andreas; Rozsa, Balazs; Roska, Botond
2017-01-01
How neuronal computations in the sensory periphery contribute to computations in the cortex is not well understood. We examined this question in the context of visual-motion processing in the retina and primary visual cortex (V1) of mice. We disrupted retinal direction selectivity – either exclusively along the horizontal axis using FRMD7 mutants or along all directions by ablating starburst amacrine cells – and monitored neuronal activity in layer 2/3 of V1 during stimulation with visual motion. In control mice, we found an overrepresentation of cortical cells preferring posterior visual motion, the dominant motion direction an animal experiences when it moves forward. In mice with disrupted retinal direction selectivity, the overrepresentation of posterior-motion-preferring cortical cells disappeared, and their response at higher stimulus speeds was reduced. This work reveals the existence of two functionally distinct, sensory-periphery-dependent and -independent computations of visual motion in the cortex. PMID:28530661
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image-motion detection (without sensation of self-motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential, although not independent, role of vision in self-motion perception.
Face Adaptation and Attractiveness Aftereffects in 8-Year-Olds and Adults
ERIC Educational Resources Information Center
Anzures, Gizelle; Mondloch, Catherine J.; Lackner, Christine
2009-01-01
A novel method was used to investigate developmental changes in face processing: attractiveness aftereffects. Consistent with the norm-based coding model, viewing consistently distorted faces shifts adults' attractiveness preferences toward the adapting stimuli. Thus, adults' attractiveness judgments are influenced by a continuously updated face…
Nonlinear circuits for naturalistic visual motion estimation
Fitzgerald, James E; Clark, Damon A
2015-01-01
Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
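The canonical pairwise cross-correlation model this abstract refers to is the Hassenstein-Reichardt correlator: each subunit multiplies a delayed signal from one point with an undelayed signal from a neighboring point, and the two mirror-symmetric subunits are subtracted. A sketch with an invented sinusoidal stimulus:

```python
# Sketch of an opponent Hassenstein-Reichardt correlator, the canonical
# pairwise cross-correlation motion estimator the abstract contrasts
# with higher-order-correlation models. Stimulus/delay are illustrative.
import math

def hr_response(signal, lag, delay):
    """Opponent correlator over points A and B, where the stimulus
    reaches B `lag` samples after A (lag > 0 means rightward motion)."""
    n = len(signal)
    start = delay + max(lag, 0)
    end = n - max(-lag, 0)
    total = 0.0
    for t in range(start, end):
        # delayed A-arm times B, minus A times delayed B-arm
        total += (signal[t - delay] * signal[t - lag]
                  - signal[t] * signal[t - lag - delay])
    return total / (end - start)

wave = [math.sin(2 * math.pi * 0.02 * t) for t in range(2000)]
rightward = hr_response(wave, lag=5, delay=5)
leftward = hr_response(wave, lag=-5, delay=5)
print(rightward, leftward)  # opponent output: positive, then negative
```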
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
After-effects of human-computer interaction indicated by P300 of the event-related brain potential.
Trimmel, M; Huber, R
1998-05-01
After-effects of human-computer interaction (HCI) were investigated using the P300 component of the event-related brain potential (ERP). Forty-nine subjects (naive non-users, beginners, experienced users, programmers) completed three paper/pencil tasks (text editing, solving intelligence-test items, filling out a questionnaire on sensation seeking) and three HCI tasks (text editing, executing a tutor program or programming, playing Tetris). The sequence of 7-min tasks was randomized between subjects and balanced between groups. After each experimental condition, ERPs were recorded during an acoustic discrimination task at F3, F4, Cz, P3 and P4. The data indicate that: (1) mental after-effects of HCI can be detected by the P300 of the ERP; (2) HCI in general led to a reduced P300 amplitude; (3) P300 amplitude also varied with type of task, mainly at F4, where it was smaller after cognitive tasks (intelligence test/programming) and larger after emotion-based tasks (sensation seeking/Tetris); (4) cognitive tasks showed shorter latencies; (5) latencies were largely location-independent (within the range of 356-358 ms at F3, F4, P3 and P4) after executing the tutor program or programming; and (6) all observed after-effects were independent of the user's experience in operating computers and may therefore reflect short-term after-effects only, rather than structural changes in information processing caused by HCI.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Bowles, R. L.
1983-01-01
This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.
DOT National Transportation Integrated Search
1964-01-01
This experiment has shown that, although both rods and cones mediate the spiral aftereffect, cone areas give a larger response. Increasing size of the retinal image results in longer durations of SAE but rods are more affected by this increase than a...
Can walking motions improve visually induced rotational self-motion illusions in virtual reality?
Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y
2015-02-04
Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems.
Perception of Visual Speed While Moving
ERIC Educational Resources Information Center
Durgin, Frank H.; Gigone, Krista; Scott, Rebecca
2005-01-01
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Neural Representation of Motion-In-Depth in Area MT
Sanada, Takahisa M.
2014-01-01
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
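The two cues distinguished in this abstract can be illustrated numerically: the CD cue differentiates after subtracting the two eyes' image positions, while the IOVD cue subtracts after differentiating. The two orders of operations yield the same signal algebraically, which is why isolating the neural contribution of each requires specialized stimuli. A sketch with invented trajectories:

```python
# Illustrative sketch of the two binocular motion-in-depth cues: CD
# (rate of change of disparity) vs. IOVD (interocular velocity
# difference). Eye-image trajectories below are invented.
import numpy as np

t = np.linspace(0.0, 1.0, 101)
x_left = 0.5 * t + 0.02 * np.sin(8 * t)   # left-eye image position (deg)
x_right = -0.5 * t                        # right-eye image position (deg)

disparity = x_left - x_right
cd_cue = np.gradient(disparity, t)                        # d(disparity)/dt
iovd_cue = np.gradient(x_left, t) - np.gradient(x_right, t)

print(np.allclose(cd_cue, iovd_cue))  # True: same signal, different route
```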
Contextual effects on motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2008-08-15
Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.
Harvey, Ben M; Dumoulin, Serge O
2016-02-15
Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
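The pRF analysis described in this abstract models each voxel's position preference as a Gaussian over visual space whose predicted response is its overlap with the stimulus aperture. A minimal sketch of that forward model, with all positions and parameter values hypothetical:

```python
import math

def prf_response(x0, y0, sigma, stim_points):
    """Predicted response of a population receptive field (pRF): a 2D
    Gaussian centred at (x0, y0) with spread sigma, summed over the
    visual-field positions covered by the stimulus aperture."""
    return sum(
        math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
        for (x, y) in stim_points
    )

# A bar aperture passing over the pRF centre should evoke a larger
# predicted response than a bar far from it (positions hypothetical).
near_bar = [(0.0, y / 10) for y in range(-10, 11)]
far_bar = [(5.0, y / 10) for y in range(-10, 11)]
print(prf_response(0.0, 0.0, 1.0, near_bar) > prf_response(0.0, 0.0, 1.0, far_bar))
```

In pRF mapping, parameters (x0, y0, sigma) are fit per voxel so that this predicted time course best matches the measured fMRI signal; motion-induced shifts in the fitted x0, y0 are the effects the study reports.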
Motion versus position in the perception of head-centred movement.
Freeman, Tom C A; Sumnall, Jane H
2002-01-01
Observers can recover motion with respect to the head during an eye movement by comparing signals encoding retinal motion and the velocity of pursuit. Evidently there is a mismatch between these signals because perceived head-centred motion is not always veridical. One example is the Filehne illusion, in which a stationary object appears to move in the opposite direction to pursuit. Like the motion aftereffect, the phenomenal experience of the Filehne illusion is one in which the stimulus moves but does not seem to go anywhere. This raises problems when measuring the illusion by motion nulling because the more traditional technique confounds perceived motion with changes in perceived position. We devised a new nulling technique using global-motion stimuli that degraded familiar position cues but preserved cues to motion. Stimuli consisted of random-dot patterns comprising signal and noise dots that moved at the same retinal 'base' speed. Noise moved in random directions. In an eye-stationary speed-matching experiment we found noise slowed perceived retinal speed as 'coherence strength' (ie percentage of signal) was reduced. The effect occurred over the two-octave range of base speeds studied and well above direction threshold. When the same stimuli were combined with pursuit, observers were able to null the Filehne illusion by adjusting coherence. A power law relating coherence to retinal base speed fit the data well with a negative exponent. Eye-movement recordings showed that pursuit was quite accurate. We then tested the hypothesis that the stimuli found at the null-points appeared to move at the same retinal speed. Two observers supported the hypothesis, a third partially, and a fourth showed a small linear trend. In addition, the retinal speed found by the traditional Filehne technique was similar to the matches obtained with the global-motion stimuli.
The results provide support for the idea that speed is the critical cue in head-centred motion perception.
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
Priming with real motion biases visual cortical response to bistable apparent motion
Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming
2012-01-01
Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797
Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi
2018-06-05
Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture or motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed biological motion of the medaka into the two elements and synthesized visual stimuli that contain both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contain at least one of the two elements. Together with previously known static visual cues in medaka, these results add to the accumulating evidence that multiple kinds of information contribute to conspecific recognition. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.
Visual/motion cue mismatch in a coordinated roll maneuver
NASA Technical Reports Server (NTRS)
Shirachi, D. K.; Shirley, R. S.
1981-01-01
The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Acceptable visual and motion cue configurations, and the effects of reduced motion cue scaling on pilot performance, were studied to determine the scale-reduction threshold at which pilot performance differed significantly from full-scale performance. It is concluded that: (1) the presence or absence of high frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) attenuating motion scaling, while keeping other display dynamic characteristics constant, affects pilot performance.
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
Aging effect in pattern, motion and cognitive visual evoked potentials.
Kuba, Miroslav; Kremláček, Jan; Langrová, Jana; Kubová, Zuzana; Szanyi, Jana; Vít, František
2012-06-01
An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associate visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at a luminance of 17 cd/m². The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per 1 year of age). Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs. Copyright © 2012 Elsevier Ltd. All rights reserved.
Short, Lindsey A; Hatry, Alexandra J; Mondloch, Catherine J
2011-02-01
The current research investigated the organization of children's face space by examining whether 5- and 8-year-olds show race-contingent aftereffects. Participants read a storybook in which Caucasian and Chinese children's faces were distorted in opposite directions. Before and after adaptation, participants judged the normality/attractiveness of expanded, compressed, and undistorted Caucasian and Chinese faces. The method was validated with adults and then refined to test 8- and 5-year-olds. The 5-year-olds were also tested in a simple aftereffects paradigm. The current research provides the first evidence for simple attractiveness aftereffects in 5-year-olds and for race-contingent aftereffects in both 5- and 8-year-olds. Evidence that adults and 5-year-olds may possess only a weak prototype for Chinese children's faces suggests that Caucasian adults' prototype for Chinese adult faces does not generalize to child faces and that children's face space undergoes a period of increasing differentiation between 5 and 8 years of age. Copyright © 2010 Elsevier Inc. All rights reserved.
Visual motion detection and habitat preference in Anolis lizards.
Steinberg, David S; Leal, Manuel
2016-11-01
The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.
Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki
2008-01-01
The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. 
These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.
Illusory visual motion stimulus elicits postural sway in migraine patients
Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi
2015-01-01
Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.
Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel
2015-08-15
When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.
Adaptation and perceptual norms
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole
2007-02-01
We used adaptation to examine the relationship between perceptual norms (the stimuli observers describe as psychologically neutral) and response norms (the stimulus levels that leave visual sensitivity in a neutral or balanced state). Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
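The titration procedure described here searches for the one adapting level that leaves appearance unbiased. Assuming the after-effect bias changes monotonically with adapting level and crosses zero (both assumptions, plus the linear bias function below, are hypothetical), the search can be sketched as a simple bisection:

```python
def find_response_norm(bias, lo, hi, tol=1e-6):
    """Bisect for the adapting level whose after-effect bias is zero,
    assuming bias(lo) < 0 < bias(hi) and bias is monotonic in between."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bias(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical linear bias function whose zero crossing (the response
# norm) sits at an adapting level of 0.3.
norm = find_response_norm(lambda level: level - 0.3, 0.0, 1.0)
print(round(norm, 3))  # 0.3
```

In practice the bias at each candidate level would come from psychophysical judgments rather than a closed-form function, but the logic of homing in on the zero-bias level is the same.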
Choice-reaction time to visual motion with varied levels of simultaneous rotary motion
NASA Technical Reports Server (NTRS)
Clark, B.; Stewart, J. D.
1974-01-01
Twelve airline pilots were studied to determine the effects of whole-body rotation on choice-reaction time to the horizontal motion of a line on a cathode-ray tube. On each trial, one of five levels of visual acceleration and five corresponding proportions of rotary acceleration were presented simultaneously. Reaction time to the visual motion decreased with increasing levels of visual motion and increased with increasing proportions of rotary acceleration. The results conflict with general theories of facilitation during double stimulation but are consistent with a neural-clock model of sensory interaction in choice-reaction time.
ERIC Educational Resources Information Center
Hills, Peter J.; Holland, Andrew M.; Lewis, Michael B.
2010-01-01
Adults can be adapted to a particular facial distortion in which both eyes are shifted symmetrically (Robbins, R., McKone, E., & Edwards, M. (2007). "Aftereffects for face attributes with different natural variability: Adapter position effects and neural models." "Journal of Experimental Psychology: Human Perception and Performance, 33," 570-592),…
A Longitudinal Study of Prism Adaptation in Infants from Six to Nine Months of Age.
ERIC Educational Resources Information Center
McDonnell, Paul M.; Abraham, Wayne C.
1981-01-01
Confirms that aftereffects of prism adaptation can be obtained in infants between 5 and 9 months of age and that the magnitude of these aftereffects is comparable to those found in adult studies. Evidence of a shift in hand preference toward the direction of prism displacement was replicated. (Author/RH)
ERIC Educational Resources Information Center
Walser, Moritz; Fischer, Rico; Goschke, Thomas
2012-01-01
We used a newly developed experimental paradigm to investigate aftereffects of completed intentions on subsequent performance that required the maintenance and execution of new intentions. Participants performed an ongoing number categorization task and an additional prospective memory (PM) task, which required them to respond to PM cues that…
Illusory motion reversal is caused by rivalry, not by perceptual snapshots of the visual field.
Kline, Keith; Holcombe, Alex O; Eagleman, David M
2004-10-01
In stroboscopic conditions, such as motion pictures, rotating objects may appear to rotate in the reverse direction due to under-sampling (aliasing). A seemingly similar phenomenon occurs in constant sunlight, which has been taken as evidence that the visual system processes discrete "snapshots" of the outside world. But if snapshots are indeed taken of the visual field, then when a rotating drum appears to transiently reverse direction, its mirror image should always appear to reverse direction simultaneously. Contrary to this hypothesis, we found that when observers watched a rotating drum and its mirror image, almost all illusory motion reversals occurred for only one image at a time. This result indicates that the motion reversal illusion cannot be explained by snapshots of the visual field. The same result is found when the two images are presented within one visual hemifield, further ruling out the possibility that discrete sampling of the visual field occurs separately in each hemisphere. The frequency distribution of illusory reversal durations approximates a gamma distribution, suggesting perceptual rivalry as a better explanation for illusory motion reversal. After adaptation of motion detectors coding for the correct direction, the activity of motion-sensitive neurons coding for motion in the reverse direction may intermittently become dominant and drive the perception of motion.
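The gamma-distribution check mentioned above can be made concrete with a method-of-moments fit, which recovers the gamma shape k and scale θ from the sample mean and variance (the example durations below are hypothetical, not data from the study):

```python
def fit_gamma_moments(durations):
    """Method-of-moments gamma fit: shape k = mean^2 / var,
    scale theta = var / mean, so that k * theta equals the mean."""
    n = len(durations)
    mean = sum(durations) / n
    var = sum((d - mean) ** 2 for d in durations) / n
    return mean * mean / var, var / mean  # (shape k, scale theta)

# Hypothetical illusory-reversal durations in seconds.
durs = [1.2, 0.8, 2.5, 1.9, 0.6, 3.1, 1.4, 2.2]
k, theta = fit_gamma_moments(durs)
print(k > 1)  # rivalry-like duration distributions typically have shape k > 1
```

A shape parameter well above 1 gives the skewed, unimodal duration histogram characteristic of perceptual rivalry, as opposed to the memoryless (exponential, k = 1) distribution a simple random-switch process would produce.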
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as famous Ukiyoe artist. We found that 'Hokusai Manga' with implied motion, depicting human bodies engaged in challenging tonic postures, significantly activated the motion-sensitive visual cortex, including MT+ in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is likely a critical region for the perception of implied motion from instability.
Usage of stereoscopic visualization in the learning contents of rotational motion.
Matsuura, Shu
2013-01-01
Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to cause dizziness and even nausea in some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers while maintaining the overall effect of instantaneous spatial recognition.
Prism adaptation in Parkinson disease: comparing reaching to walking and freezers to non-freezers.
Nemanich, Samuel T; Earhart, Gammon M
2015-08-01
Visuomotor adaptation to gaze-shifting prism glasses requires recalibration of the relationship between sensory input and motor output. Healthy individuals flexibly adapt movement patterns to many external perturbations; however, individuals with cerebellar damage do not adapt movements to the same extent. People with Parkinson disease (PD) adapt normally, but exhibit reduced after-effects, which are negative movement errors following the removal of the prism glasses and are indicative of true spatial realignment. Walking is particularly affected in PD, and many individuals experience freezing of gait (FOG), an episodic interruption in walking, that is thought to have a distinct pathophysiology. Here, we examined how individuals with PD with (PD + FOG) and without (PD - FOG) FOG, along with healthy older adults, adapted both reaching and walking patterns to prism glasses. Participants completed a visually guided reaching and walking task with and without rightward-shifting prism glasses. All groups adapted at similar rates during reaching and during walking. However, overall walking adaptation rates were slower compared to reaching rates. The PD - FOG group showed smaller after-effects, particularly during walking, compared to PD + FOG, independent of adaptation magnitude. While FOG did not appear to affect characteristics of prism adaptation, these results support the idea that the distinct neural processes governing visuomotor adaptation and storage are differentially affected by basal ganglia dysfunction in PD.
Dokka, Kalpana; DeAngelis, Gregory C.
2015-01-01
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in the precision. 
Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
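The optimal cue-integration prediction tested above has a simple closed form. The sketch below (with hypothetical threshold values, not data from the study) shows the standard reliability-weighted combination rule: the combined threshold is predicted from the two single-cue thresholds, and the combined estimate is a reliability-weighted average.

```python
import math

def optimal_combined_sigma(sigma_vis, sigma_vest):
    """Predicted discrimination threshold (sigma) under optimal
    reliability-weighted integration of two independent cues."""
    return math.sqrt((sigma_vis**2 * sigma_vest**2) /
                     (sigma_vis**2 + sigma_vest**2))

def optimal_combined_estimate(h_vis, sigma_vis, h_vest, sigma_vest):
    """Reliability-weighted average of two single-cue heading estimates."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
    return w_vis * h_vis + (1 - w_vis) * h_vest

# Hypothetical example: visual threshold 4 deg, vestibular threshold 3 deg.
sigma_comb = optimal_combined_sigma(4.0, 3.0)
print(round(sigma_comb, 2))  # 2.4 -- better than either cue alone
```

Note that the predicted combined threshold is always below the smaller of the two single-cue thresholds, which is the signature of optimal integration the study tests against.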
Schroeder, David; Korsakov, Fedor; Knipe, Carissa Mai-Ping; Thorson, Lauren; Ellingson, Arin M; Nuckley, David; Carlis, John; Keefe, Daniel F
2014-12-01
In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.
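The time-dependent clustering step that detects motion trends can be illustrated with a deliberately simplified 1-D sketch: cluster the trial values independently at each time step, then inspect how cluster membership changes over time. The gap-based grouping and the synthetic trials below are stand-ins, not the paper's actual algorithm or data.

```python
def cluster_1d(values, gap):
    """Group 1-D values into clusters wherever consecutive sorted
    values are separated by more than `gap` (an illustrative stand-in
    for the per-time-step clustering)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    clusters, current = [], [order[0]]
    for prev, nxt in zip(order, order[1:]):
        if values[nxt] - values[prev] > gap:
            clusters.append(current)
            current = []
        current.append(nxt)
    clusters.append(current)
    return clusters

def trends_over_time(trials, gap=10.0):
    """trials: equal-length time series (e.g., neck flexion angles).
    Returns the clustering of trial indices at each time step, so one
    can see trials leaving and joining trends over time."""
    n_steps = len(trials[0])
    return [cluster_1d([t[k] for t in trials], gap) for k in range(n_steps)]

# Three synthetic trials: two stay together, one diverges over time.
trials = [[0, 1, 2, 3], [0, 1, 2, 4], [0, 15, 30, 45]]
per_step = trends_over_time(trials)
print(len(per_step[0]), len(per_step[-1]))  # 1 trend at t=0, 2 at the end
```

Tracking which indices move between clusters from one step to the next is exactly the information the paper's 2D technique visualizes.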
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. Here we circumvented these limitations by letting participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Filling gaps in visual motion for target capture
Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637
NASA Technical Reports Server (NTRS)
Daunton, N. G.; Fox, R. A.; Crampton, G. H.
1984-01-01
Experiments are documented in which the susceptibility of both cats and squirrel monkeys to motion sickness induced by visual stimulation was assessed. In addition, it is shown that in both species those individual subjects most highly susceptible to sickness induced by passive motion are also those most likely to become motion sick from visual (optokinetic) stimulation alone.
Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K
2015-01-22
The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or whether it is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
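The causality analysis above rests on Granger's residual-variance comparison. The sketch below implements the plain bivariate form on synthetic signals; the study's conditional variant, which additionally conditions on V1's signal, is omitted here for brevity.

```python
import numpy as np

def granger_causality(x, y, lag=2):
    """Simplified bivariate Granger causality from x to y: log ratio of
    residual variances of y's restricted AR model (own past only) vs.
    the full model (own past plus past of x)."""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    both = np.column_stack([own] +
                           [x[lag - k:n - k][:, None] for k in range(1, lag + 1)])
    res_r = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    res_f = Y - both @ np.linalg.lstsq(both, Y, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):          # y is driven by x with a one-step delay
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
gc_xy = granger_causality(x, y)
gc_yx = granger_causality(y, x)
print(gc_xy > gc_yx)  # True: the estimated influence runs from x to y
```

In the study, the analogous asymmetry (after conditioning on V1) is what licenses the claim of a direct LGN-to-hMT+ influence.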
Viewpoint and pose in body-form adaptation.
Sekunova, Alla; Black, Michael; Parkinson, Laura; Barton, Jason J S
2013-01-01
Faces and bodies are complex structures, perception of which can play important roles in person identification and inference of emotional state. Face representations have been explored using behavioural adaptation: in particular, studies have shown that face aftereffects show relatively broad tuning for viewpoint, consistent with origin in a high-level structural descriptor far removed from the retinal image. Our goals were to determine first, if body aftereffects also showed a degree of viewpoint invariance, and second if they also showed pose invariance, given that changes in pose create even more dramatic changes in the 2-D retinal image. We used a 3-D model of the human body to generate headless body images, whose parameters could be varied to generate different body forms, viewpoints, and poses. In the first experiment, subjects adapted to varying viewpoints of either slim or heavy bodies in a neutral stance, followed by test stimuli that were all front-facing. In the second experiment, we used the same front-facing bodies in neutral stance as test stimuli, but compared adaptation from bodies in the same neutral stance to adaptation with the same bodies in different poses. We found that body aftereffects were obtained over substantial viewpoint changes, with no significant decline in aftereffect magnitude with increasing viewpoint difference between adapting and test images. Aftereffects also showed transfer across one change in pose but not across another. We conclude that body representations may have more viewpoint invariance than faces, and demonstrate at least some transfer across pose, consistent with a high-level structural description.
Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease
Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.
2013-01-01
We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find elevated perceptual thresholds for letter and word discrimination from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256
ERIC Educational Resources Information Center
Robbins, Rachel; McKone, Elinor; Edwards, Mark
2007-01-01
Adaptation to distorted faces is commonly interpreted as a shift in the face-space norm for the adapted attribute. This article shows that the size of the aftereffect varies as a function of the distortion level of the adapter. The pattern differed for different facial attributes, increasing with distortion level for symmetric deviations of eye…
NASA Technical Reports Server (NTRS)
Parris, B. L.; Cook, A. M.
1978-01-01
Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, but the combined use of visual and motion cueing results in far better performance.
Startle stimuli reduce the internal model control in discrete movements.
Wright, Zachary A; Rogers, Mark W; MacKinnon, Colum D; Patton, James L
2009-01-01
A well-known and major component of movement control is the feedforward component, also known as the internal model. This model predicts and compensates for expected forces seen during a movement, based on recent experience, so that a well-learned task such as reaching to a target can be executed in a smooth, straight manner. It has recently been shown that the state of preparation of planned movements can be tested using a startling acoustic stimulus (SAS). An SAS presented 500, 250, or 0 ms before the expected "go" cue resulted in the early release of the movement trajectory associated with the after-effects of force field training (i.e. the internal model). In a typical motor adaptation experiment with a robot-applied force field, we tested whether an SAS influences the size of the after-effects that are typically seen. We found that in all subjects the after-effect magnitudes were significantly reduced when movements were released by SAS, although this effect was not further modulated by the timing of the SAS. Reduced after-effects reveal at least partial existence of learned preparatory control, and identify startle effects that could influence performance in tasks such as piloting, teleoperation, and sports.
Verspui, Remko; Gray, John R
2009-10-01
Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
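The kind of multivariate decoding discussed here can be illustrated with a minimal nearest-centroid classifier applied to synthetic voxel patterns carrying a direction-dependent response bias. Published analyses typically use linear classifiers such as support vector machines, so treat this as a simplified stand-in.

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    """Assign each test pattern to the class with the nearest
    training-set centroid (a minimal multivariate decoder)."""
    classes = sorted(set(train_y))
    centroids = {c: train_X[np.array(train_y) == c].mean(axis=0)
                 for c in classes}
    return [min(classes,
                key=lambda c: np.linalg.norm(pattern - centroids[c]))
            for pattern in test_X]

rng = np.random.default_rng(1)
n_vox, n_trials = 50, 40
bias = rng.standard_normal(n_vox)       # voxel-wise direction bias
X, y = [], []
for direction in (0, 1):                # two motion directions
    for _ in range(n_trials):
        sign = 1 if direction else -1
        X.append(sign * 0.5 * bias + rng.standard_normal(n_vox))
        y.append(direction)
X = np.array(X)
# Split half/half into train and test sets
train = np.r_[0:20, 40:60]; test = np.r_[20:40, 60:80]
preds = nearest_centroid_decode(X[train], [y[i] for i in train], X[test])
acc = np.mean([p == y[i] for p, i in zip(preds, test)])
print(acc > 0.5)  # True: above-chance decoding driven by the voxel bias
```

As the abstract notes, above-chance accuracy of this kind need not reflect a columnar-scale functional organization; here it is driven entirely by a coarse voxel-wise bias pattern.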
Effects of attention and laterality on motion and orientation discrimination in deaf signers.
Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R
2013-06-01
Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.
Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.
2011-01-01
Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035
Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders
Peiker, Ina; Schneider, Till R.; Milne, Elizabeth; Schöttle, Daniel; Vogeley, Kai; Münchau, Alexander; Schunke, Odette; Siegel, Markus; Engel, Andreas K.; David, Nicole
2015-01-01
Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether in individuals with ASD neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more strongly with motion coherence in ASD compared to controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma-band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD. PMID:26147342
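The group comparison above amounts to comparing regression gains of gamma-band power on motion coherence. A minimal sketch with hypothetical, noiseless group curves (real analyses fit polynomial models per participant and test the coefficients statistically):

```python
import numpy as np

def linear_gain(coherence, power):
    """Slope of gamma-band power vs. motion coherence -- the linear
    gain term of the regression described in the abstract."""
    return np.polyfit(coherence, power, 1)[0]

coh = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
power_ctrl = 1.0 + 0.8 * coh          # hypothetical control-group curve
power_asd = 1.0 + 1.6 * coh           # hypothetical steeper ASD curve
print(linear_gain(coh, power_asd) > linear_gain(coh, power_ctrl))  # True
```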
Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly
2017-01-01
The brain networks supporting nausea are not yet understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation for anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.
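Sympathovagal balance is conventionally indexed by the ratio of low-frequency (0.04-0.15 Hz) to high-frequency (0.15-0.4 Hz) power in the RR-interval series. The sketch below uses a plain periodogram on a synthetic RR series; the original work used more sophisticated HRV modeling, so treat this as illustrative only.

```python
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF power ratio of an RR-interval series (ms). The beat-by-beat
    series is resampled onto an even time grid at `fs` Hz, then spectral
    power is summed in the standard LF and HF bands."""
    t = np.cumsum(rr_ms) / 1000.0                   # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr_ms)
    rr_even = rr_even - rr_even.mean()
    spec = np.abs(np.fft.rfft(rr_even)) ** 2
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    lf = spec[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spec[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf

# Synthetic RR series: a 0.1 Hz (LF) oscillation stronger than 0.25 Hz (HF)
beats = 600
base = np.full(beats, 800.0)                        # 800 ms mean RR interval
t_beat = np.cumsum(base) / 1000.0
rr = base + 50 * np.sin(2 * np.pi * 0.1 * t_beat) \
         + 10 * np.sin(2 * np.pi * 0.25 * t_beat)
print(lf_hf_ratio(rr) > 1)  # True: LF-dominant series gives a ratio above 1
```

A rise in this ratio from BASELINE to NAUSEA is the kind of shift toward sympathetic dominance the study correlates with connectivity change.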
The relationship of global form and motion detection to reading fluency.
Englund, Julia A; Palomares, Melanie
2012-08-15
Visual motion processing in typical and atypical readers has suggested that aspects of reading and motion processing share a common cortical network rooted in dorsal visual areas. Few studies have examined the relationship between reading performance and visual form processing, which is mediated by ventral cortical areas. We investigated whether reading fluency correlates with coherent motion detection thresholds in typically developing children using random dot kinematograms. As a comparison, we also evaluated the correlation between reading fluency and static form detection thresholds. Results show that both dorsal and ventral visual functions correlated with components of reading fluency, but that they have different developmental characteristics. Motion coherence thresholds correlated with reading rate and accuracy, which both improved with chronological age. Interestingly, when controlling for non-verbal abilities and age, reading accuracy significantly correlated with thresholds for coherent form detection but not coherent motion detection in typically developing children. Dorsal visual functions that mediate motion coherence seem to be related to the maturation of broad cognitive functions, including non-verbal abilities and reading fluency. However, ventral visual functions that mediate form coherence seem to be specifically related to accurate reading in typically developing children. Copyright © 2012 Elsevier Ltd. All rights reserved.
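Controlling for age and non-verbal ability, as done in this study, is a partial-correlation analysis: correlate the residuals of each variable after regressing out the covariate. A minimal residual-based sketch on synthetic data (the variables and effect sizes below are hypothetical):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y controlling for covariate z,
    computed by correlating the residuals of x and y after each is
    regressed on z (plus an intercept)."""
    design = np.column_stack([np.ones(len(z)), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
age = rng.uniform(6, 12, 100)                      # hypothetical ages (years)
reading = age + rng.standard_normal(100) * 0.5     # improves with age
threshold = -age + rng.standard_normal(100) * 0.5  # drops (improves) with age
raw = np.corrcoef(reading, threshold)[0, 1]
partial = partial_corr(reading, threshold, age)
print(abs(partial) < abs(raw))  # True: shared age variance inflates raw r
```

This is why the study's raw threshold-fluency correlations shrink once age and non-verbal ability are partialled out.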
On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation
2015-03-01
SWIR: Short Wave Infrared; VisualSFM: Visual Structure from Motion; WPAFB: Wright Patterson Air Force Base. … Visual Structure from Motion (VisualSFM) is an application that performs incremental SfM using images of a scene fed into it [20]. … too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit the old one. …
A Role for Mouse Primary Visual Cortex in Motion Perception.
Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo
2018-06-04
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
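Random dot kinematograms of the kind used in this task are conventionally generated by moving a "signal" fraction of dots coherently and re-plotting the remaining dots at random positions each frame. A minimal per-frame sketch follows; the dot count, wrap-around rule, and noise-dot replotting scheme are assumptions for illustration, not the authors' exact stimulus.

```python
import numpy as np

def rdk_step(pos, coherence, direction, speed, size, rng):
    # Advance one frame of a random dot kinematogram.
    # A fraction `coherence` of dots steps in `direction` (radians);
    # the remaining "noise" dots are re-plotted uniformly at random.
    n = len(pos)
    signal = rng.random(n) < coherence
    step = speed * np.array([np.cos(direction), np.sin(direction)])
    pos[signal] = (pos[signal] + step) % size              # coherent dots drift, wrapping at edges
    pos[~signal] = rng.random(((~signal).sum(), 2)) * size # noise dots re-plotted
    return pos
```

At coherence 0.16 (the 16% condition above), only about one dot in six carries the direction signal on any given frame.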
Effects of translational and rotational motions and display polarity on visual performance.
Feng, Wen-Yang; Tseng, Feng-Yi; Chao, Chin-Jung; Lin, Chiuhsiang Joe
2008-10-01
This study investigated the effects of translational and rotational motion and of display polarity on a visual identification task. Three motion types (heave, roll, and pitch) were compared with a static (no motion) condition. The visual task was presented on two display polarities, black-on-white and white-on-black. The experiment was a 4 (motion condition) × 2 (display polarity) within-subjects design with eight subjects (six men and two women; M age = 25.6 yr., SD = 3.2). The dependent variables used to assess performance on the visual task were accuracy and reaction time. Motion environments, especially the roll condition, significantly degraded both accuracy and reaction time. Display polarity was significant only in the static condition.
Streepey, Jefferson W; Kenyon, Robert V; Keshner, Emily A
2007-01-01
We previously reported responses to induced postural instability in young healthy individuals viewing visual motion with a narrow (25 degrees in both directions) and wide (90 degrees and 55 degrees in the horizontal and vertical directions) field of view (FOV) as they stood on different sized blocks. Visual motion was achieved using an immersive virtual environment that moved realistically with head motion (natural motion) and translated sinusoidally at 0.1 Hz in the fore-aft direction (augmented motion). We observed that a subset of the subjects (steppers) could not maintain continuous stance on the smallest block when the virtual environment was in motion. We completed a posteriori analyses on the postural responses of the steppers and non-steppers that may inform us about the mechanisms underlying these differences in stability. We found that when viewing augmented motion with a wide FOV, there was a greater effect on the head and whole body center of mass and ankle angle root mean square (RMS) values of the steppers than of the non-steppers. FFT analyses revealed greater power at the frequency of the visual stimulus in the steppers compared to the non-steppers. Whole body COM time lags relative to the augmented visual scene revealed that the time-delay between the scene and the COM was significantly increased in the steppers. The increased responsiveness to visual information suggests a greater visual field-dependency of the steppers and suggests that the thresholds for shifting from a reliance on visual information to somatosensory information can differ even within a healthy population.
Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations
NASA Technical Reports Server (NTRS)
Mitchell, David G.; Hart, Daniel C.
1993-01-01
The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delays and motion command washout filters. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities, rather than changes in the aircraft model itself. The results indicate a need for tailoring the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than eliminate the visual delay entirely through lead compensation.
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in the distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.
MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.
Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik
2016-01-01
Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
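The aggregation of pose sequences into a tree of motion patterns, as described above, can be sketched as a prefix tree whose edge counts record how many trials share each pose-to-pose transition. The data layout below is an illustrative assumption, not MotionFlow's implementation.

```python
def build_pattern_tree(sequences):
    # Aggregate pose-state sequences into a prefix tree: each node maps a
    # pose state to [count, children], where count is the number of
    # sequences whose motion passes through that transition. Sequences
    # sharing a prefix merge into one branch, exposing common patterns.
    tree = {}
    for seq in sequences:
        node = tree
        for pose in seq:
            if pose not in node:
                node[pose] = [0, {}]
            node[pose][0] += 1
            node = node[pose][1]
    return tree
```

Branch points in the resulting tree mark where motion styles diverge, which is what the flow visualization renders.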
Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.
Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko
2008-01-01
To evaluate visually-induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min-long first-person-view video section five times (total 10 min) continuously. Measured biosignals, the RR interval, respiration, and blood pressure, were used to estimate indices related to autonomic nervous activity (ANA). We then determined trigger points and sensation sections based on the time-varying behavior of the ANA-related indices. We found that a suitable combination of biosignals can present the symptoms of visually-induced motion sickness. Based on this combination, integrating trigger points and subjective scores allowed us to represent the time-distribution of subjective responses during visual exposure and helped us understand which types of camera motion cause visually-induced motion sickness.
Weinstein, Joel M; Gilmore, Rick O; Shaikh, Sumera M; Kunselman, Allen R; Trescher, William V; Tashima, Lauren M; Boltz, Marianne E; McAuliffe, Matthew B; Cheung, Albert; Fesi, Jeremy D
2012-07-01
We sought to characterize visual motion processing in children with cerebral visual impairment (CVI) due to periventricular white matter damage caused by either hydrocephalus (eight individuals) or periventricular leukomalacia (PVL) associated with prematurity (11 individuals). Using steady-state visually evoked potentials (ssVEP), we measured cortical activity related to motion processing for two distinct types of visual stimuli: 'local' motion patterns thought to activate mainly primary visual cortex (V1), and 'global' or coherent patterns thought to activate higher cortical visual association areas (V3, V5, etc.). We studied three groups of children: (1) 19 children with CVI (mean age 9y 6mo [SD 3y 8mo]; 9 male; 10 female); (2) 40 neurologically and visually normal comparison children (mean age 9y 6mo [SD 3y 1mo]; 18 male; 22 female); and (3) because strabismus and amblyopia are common in children with CVI, a group of 41 children without neurological problems who had visual deficits due to amblyopia and/or strabismus (mean age 7y 8mo [SD 2y 8mo]; 28 male; 13 female). We found that the processing of global as opposed to local motion was preferentially impaired in individuals with CVI, especially for slower target velocities (p=0.028). Motion processing is impaired in children with CVI. ssVEP may provide useful and objective information about the development of higher visual function in children at risk for CVI. © The Authors. Journal compilation © Mac Keith Press 2011.
Kasten, Florian H.; Herrmann, Christoph S.
2017-01-01
Transcranial alternating current stimulation (tACS) has been repeatedly demonstrated to modulate endogenous brain oscillations in a frequency specific manner. Thus, it is a promising tool to uncover causal relationships between brain oscillations and behavior or perception. While tACS has been shown to elicit a physiological aftereffect for up to 70 min, it remains unclear whether the effect can still be elicited if subjects perform a complex task interacting with the stimulated frequency band. In addition, it has not yet been investigated whether the aftereffect is behaviorally relevant. In the current experiment, participants performed a Shepard-like mental rotation task for 80 min. After 10 min of baseline measurement, participants received either 20 min of tACS at their individual alpha frequency (IAF) or sham stimulation (30 s tACS in the beginning of the stimulation period). Afterwards another 50 min of post-stimulation EEG were recorded. Task performance and EEG were acquired during the whole experiment. While there were no effects of tACS on reaction times or event-related-potentials (ERPs), results revealed an increase in mental rotation performance in the stimulation group as compared to sham both during and after stimulation. This was accompanied by increased ongoing alpha power and coherence as well as event-related-desynchronization (ERD) in the alpha band in the stimulation group. The current study demonstrates a behavioral and physiological aftereffect of tACS in parallel. This indicates that it is possible to elicit aftereffects of tACS during tasks interacting with the alpha band. Therefore, the tACS aftereffect is suitable to achieve an experimental manipulation. PMID:28197084
Dileone, Michele; Ranieri, Federico; Florio, Lucia; Capone, Fioravante; Musumeci, Gabriella; Leoni, Chiara; Mordillo-Mateos, Laura; Tartaglia, Marco; Zampino, Giuseppe; Di Lazzaro, Vincenzo
2016-01-01
Costello syndrome (CS) is a rare congenital disorder due to a G12S amino acid substitution in the HRAS proto-oncogene. Previous studies have shown that Paired Associative Stimulation (PAS), a repetitive brain stimulation protocol inducing motor cortex plasticity by coupling peripheral nerve stimulation with brain stimulation, leads to an extremely pronounced motor cortex excitability increase in CS patients. Intermittent Theta Burst Stimulation (iTBS) represents a protocol able to induce motor cortex plasticity by trains of stimuli at 50 Hz. In healthy subjects PAS and iTBS produce similar after-effects in motor cortex excitability. Experimental models showed that HRAS-dependent signalling pathways differently affect LTP induced by different patterns of repetitive synaptic stimulation. We aimed to compare iTBS-induced after-effects on motor cortex excitability with those produced by PAS in CS patients and to observe whether HRAS mutation differentially affects two different forms of neuromodulation protocols. We evaluated in vivo after-effects induced by PAS and iTBS applied over the right motor cortex in 4 CS patients and in 21 healthy age-matched controls. Our findings confirmed HRAS-dependent extremely pronounced PAS-induced after-effects and showed for the first time that iTBS induces no change in MEP amplitude in CS patients whereas both protocols lead to an increase of about 50% in controls. CS patients are characterized by an impairment of iTBS-related LTP-like phenomena besides enhanced PAS-induced after-effects, suggesting that HRAS-dependent signalling pathways have a differential influence on PAS- and iTBS-induced plasticity in humans. Copyright © 2015 Elsevier Inc. All rights reserved.
Walter, Armin; Murguialday, Ander R.; Rosenstiel, Wolfgang; Birbaumer, Niels; Bogdan, Martin
2012-01-01
Brain-state-dependent stimulation (BSDS) combines brain-computer interfaces (BCIs) and cortical stimulation into one paradigm that allows online decoding of, for example, movement intention from brain signals while simultaneously applying stimulation. If the BCI decoding is performed by spectral features, stimulation after-effects such as artefacts and evoked activity present a challenge for a successful implementation of BSDS because they can impair the detection of targeted brain states. Therefore, efficient and robust methods are needed to minimize the influence of the stimulation-induced effects on spectral estimation without violating the real-time constraints of the BCI. In this work, we compared four methods for spectral estimation with autoregressive (AR) models in the presence of pulsed cortical stimulation. Using combined EEG-TMS (electroencephalography-transcranial magnetic stimulation) as well as combined electrocorticography (ECoG) and epidural electrical stimulation, three patients performed a motor task using a sensorimotor-rhythm BCI. Three stimulation paradigms were varied between sessions: (1) no stimulation, (2) single stimulation pulses applied independently (open-loop), or (3) coupled to the BCI output (closed-loop) such that stimulation was given only while an intention to move was detected using neural data. We found that removing the stimulation after-effects by linear interpolation can introduce a bias in the estimation of the spectral power of the sensorimotor rhythm, leading to an overestimation of decoding performance in the closed-loop setting. We propose the use of the Burg algorithm for segmented data to deal with stimulation after-effects. This work shows that the combination of BCIs controlled with spectral features and cortical stimulation in a closed-loop fashion is possible when the influence of stimulation after-effects on spectral estimation is minimized. PMID:23162436
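The Burg method the authors build on estimates AR coefficients by choosing each reflection coefficient to minimize the summed forward and backward prediction-error power. A minimal single-segment sketch follows; the authors' variant for segmented data, which accumulates these error sums over artefact-free segments only, is omitted here.

```python
import numpy as np

def burg_ar(x, order):
    # Burg's method for AR model fitting: at each stage, pick the
    # reflection coefficient k that minimizes forward + backward
    # prediction-error power, then extend the AR coefficients with
    # the Levinson recursion. Sign convention: x[t] + a.dot(x[t-1:...]) = e[t].
    x = np.asarray(x, dtype=float)
    ef = x.copy()              # forward prediction errors
    eb = x.copy()              # backward prediction errors
    a = np.zeros(0)            # AR coefficients, grows by one per stage
    e = np.dot(x, x) / len(x)  # prediction-error power
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]  # align f(n) with b(n-1)
        k = -2.0 * np.dot(efp, ebp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        ef = efp + k * ebp          # error arrays shrink by one each stage
        eb = ebp + k * efp
        a = np.concatenate([a + k * a[::-1], [k]])  # Levinson update
        e *= 1.0 - k * k
    return a, e
```

The spectral estimate then follows from the AR coefficients; for BSDS, fitting on stimulation-free segments avoids interpolating across pulse artefacts.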
Kawashima, Takayuki; Sato, Takao
2012-01-01
When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.
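A stimulus carrying an envelope-only ITD, like the adapters above, can be sketched by applying the same amplitude envelope to both channels with one channel's envelope delayed. The carrier and modulation frequencies below are assumed values, and plain sinusoidal AM stands in as a simplification of the half-wave-rectified transposed-tone envelope.

```python
import numpy as np

fs = 48000                         # sample rate (Hz)
carrier_hz, am_hz = 4000.0, 128.0  # high-frequency carrier, envelope rate (assumed)
itd_s = 600e-6                     # 600 microsecond envelope ITD, as in the adapter
t = np.arange(int(fs * 0.5)) / fs  # 0.5 s stimulus (duration assumed)

carrier = np.sin(2 * np.pi * carrier_hz * t)          # identical fine structure in both ears
env_left = 0.5 * (1 + np.sin(2 * np.pi * am_hz * t))  # envelope, 0..1
env_right = 0.5 * (1 + np.sin(2 * np.pi * am_hz * (t - itd_s)))  # same envelope, delayed

# Only the envelope timing differs between channels, so any lateralization
# shift must come from the envelope ITD, not the carrier fine structure.
stereo = np.stack([carrier * env_left, carrier * env_right], axis=1)
```

Because the carrier is identical across channels, this isolates the envelope ITD cue that the study shows drives the localization aftereffect.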
Monte-Silva, Katia; Kuo, Min-Fang; Hessenthaler, Silvia; Fresnoza, Shane; Liebetanz, David; Paulus, Walter; Nitsche, Michael A
2013-05-01
Non-invasive brain stimulation enables the induction of neuroplasticity in humans, however, with so far restricted duration of the respective cortical excitability modifications. Conventional anodal transcranial direct current stimulation (tDCS) protocols including one stimulation session induce NMDA receptor-dependent excitability enhancements lasting for about 1 h. We aimed to extend the duration of tDCS effects by periodic stimulation, consisting of two stimulation sessions, since periodic stimulation protocols are able to induce neuroplastic excitability alterations stable for days or weeks, termed late phase long term potentiation (l-LTP), in animal slice preparations. Since both, l-LTP and long term memory formation, require gene expression and protein synthesis, and glutamatergic receptor activity modifications, l-LTP might be a candidate mechanism for the formation of long term memory. The impact of two consecutive tDCS sessions on cortical excitability was probed in the motor cortex of healthy humans, and compared to that of a single tDCS session. The second stimulation was applied without an interval (temporally contiguous tDCS), during the after-effects of the first stimulation (during after-effects; 3, or 20 min interval), or after the after-effects of the first stimulation had vanished (post after-effects; 3 or 24 h interval). The during after-effects condition resulted in an initially reduced, but then relevantly prolonged excitability enhancement, which was blocked by an NMDA receptor antagonist. The other conditions resulted in an abolishment, or a calcium channel-dependent reversal of neuroplasticity. Repeated tDCS within a specific time window is able to induce l-LTP-like plasticity in the human motor cortex. Copyright © 2013 Elsevier Inc. All rights reserved.
Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian
2007-01-01
The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at 50 cm distance. Acuity thresholds were determined by the lowest size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four different conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s). The four conditions included: subject rotated in semi-darkness (i.e., limited to background illumination of the display), subject stationary while visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity performance was greatest when the subject rotated around a stationary visual background; i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity performance was most reduced when the subject and visual scene rotated together; i.e., when the visual scene provided discordant information about the motion. Ranges of 4-5 logMAR step sizes across the conditions indicated the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.
Estimation of bio-signal based on human motion for integrated visualization of daily-life.
Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko
2007-01-01
This paper describes a method for estimating bio-signals from human motion in daily life, for use in an integrated visualization system. Recent advances in computing and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. For visualization applications, it is desirable to have a method that infers muscle activity from human motion data and evaluates how physiological parameters change with motion. We assume that human motion is generated by muscle activity driven by the brain, and that this activity is reflected in bio-signals such as electromyograms. This paper introduces a method for estimating bio-signals using neural networks; the same procedure can be applied to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
Visualizing the ground motions of the 1906 San Francisco earthquake
Chourasia, A.; Cutchin, S.; Aagaard, Brad T.
2008-01-01
With advances in computational capabilities and refinement of seismic wave-propagation models in the past decade, large three-dimensional simulations of earthquake ground motion have become possible. The resulting datasets from these simulations are multivariate, temporal, and multi-terabyte in size. Past visual representations of results from seismic studies have been largely confined to static two-dimensional maps. New visual representations provide scientists with alternate ways of viewing and interacting with these results, potentially leading to new and significant insight into the physical phenomena. Visualizations can also be used for pedagogic and general dissemination purposes. We present a workflow for visual representation of the data from a ground-motion simulation of the great 1906 San Francisco earthquake. We employed state-of-the-art animation tools to visualize the ground motions with a high degree of accuracy and visual realism. © 2008 Elsevier Ltd.
Recovery of biological motion perception and network plasticity after cerebellar tumor removal.
Sokolov, Arseny A; Erb, Michael; Grodd, Wolfgang; Tatagiba, Marcos S; Frackowiak, Richard S J; Pavlova, Marina A
2014-10-01
Visual perception of body motion is vital for everyday activities such as social interaction, motor learning or car driving. Tumors to the left lateral cerebellum impair visual perception of body motion. However, compensatory potential after cerebellar damage and underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL with dysplastic gangliocytoma (Lhermitte-Duclos disease) to the left cerebellum before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (MRI). Alterations in underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a similar pattern of cerebellar activation during biological motion processing as in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in normalcy, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and neural circuits underpinning visual social cognition.
Ventura, Joel; DiZio, Paul; Lackner, James R.
2013-01-01
In a rotating environment, goal-oriented voluntary movements are initially disrupted in trajectory and endpoint, due to movement-contingent Coriolis forces, but accuracy is regained with additional movements. We studied whether adaptation acquired in a voluntary, goal-oriented postural swaying task performed during constant-velocity counterclockwise rotation (10 RPM) carries over to recovery from falling induced using a hold and release (H&R) paradigm. In H&R, standing subjects actively resist a force applied to their chest, which when suddenly released results in a forward fall and activation of an automatic postural correction. We tested H&R postural recovery in subjects (n = 11) before and after they made voluntary fore-aft swaying movements during 20 trials of 25 s each, in a counterclockwise rotating room. Their voluntary sway about their ankles generated Coriolis forces that initially induced clockwise deviations of the intended body sway paths, but fore-aft sway was gradually restored over successive per-rotation trials, and a counterclockwise aftereffect occurred during postrotation attempts to sway fore-aft. In H&R trials, we examined the initial 10- to 150-ms periods of movement after release from the hold force, when voluntary corrections of movement path are not possible. Prerotation subjects fell directly forward, whereas postrotation their forward motion was deviated significantly counterclockwise. The postrotation deviations were in a direction consistent with an aftereffect reflecting persistence of a compensation acquired per-rotation for voluntary swaying movements. These findings show that control and adaptation mechanisms adjusting voluntary postural sway to the demands of a new force environment also influence the automatic recovery of posture. PMID:24304863
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.
1977-01-01
Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity is present under motion conditions than under fixed-base conditions, a fact the pilots attributed to the sense of a real machine's (helicopter's) limitations conveyed by the added motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.
Implied motion language can influence visual spatial memory.
Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick
2017-07-01
How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.
Burnham, Joy J.; Hooper, Lisa M.
2012-01-01
Researchers have reported how Hurricane Katrina has affected teachers who work with Kindergarten to Grade 12 (K-12), yet little is known about how the natural disaster has affected other important K-12 faculty and staff (e.g., coaches, librarians, school counselors, and cafeteria workers). Missing from the literature is the impact that this natural disaster has had on these formal (school counselors) and informal (coaches, librarians) helpers of K-12 students. Using a focus group methodology, the authors examined the aftereffects of Hurricane Katrina on 12 school employees in New Orleans, Louisiana, 18 months after the hurricane. Informed by qualitative content analysis, three emergent themes were identified: emotion-focused aftereffects, positive coping, and worry and fear. The implications for future research and promoting hope in mental health counseling are discussed. PMID:22629217
Perceived state of self during motion can differentially modulate numerical magnitude allocation.
Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M
2016-09-01
Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements, which themselves can influence numerical processing, and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness that impact upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self' motion, during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. By contrast, vestibular motion perception did not modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations.
We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Visual Features Involving Motion Seen from Airport Control Towers
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion
2010-01-01
Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding such cues will help determine whether they can be safely used in a remote/virtual tower, in which their presentation may be visually degraded.
Visualization of Heart Sounds and Motion Using Multichannel Sensor
NASA Astrophysics Data System (ADS)
Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko
2010-06-01
As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist in the understanding of the heartbeat for both doctors and patients. Auscultatory sounds were first analyzed using FFT and wavelet methods to visualize heart sounds. Next, to show global and simultaneous heart motions, a new visualization technique was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval, which was adhered to the chest surface. One cycle of heart motion was visualized at a sampling frequency of 3 kHz and a quantization of 12 bits. The visualized results showed a typical waveform motion of the strong pressure shock due to closure of the tricuspid and mitral valves at the cardiac apex (first sound), followed by closure of the aortic and pulmonic valves (second sound). To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to digital database management of auscultation examinations in medical settings.
Visual gravitational motion and the vestibular system in humans
Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka
2013-01-01
The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761
Normal form from biological motion despite impaired ventral stream function.
Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P
2011-04-01
We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Pfeiffer, Mark G.; Scott, Paul G.
A fly-only group (N=16) of Navy replacement pilots undergoing fleet readiness training in the SH-3 helicopter was compared with groups pre-trained on Device 2F64C with: (1) visual only (N=13); (2) no visual/no motion (N=14); and (3) one visual plus motion group (N=19). Groups were compared for their SH-3 helicopter performance in the transition…
Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada
2013-01-01
Head movement imposes additional burdens on the visual system: maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibulo-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism has been propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability, and indirect measures have variously suggested none, focal, or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031
Audiovisual temporal recalibration: space-based versus context-based.
Yuan, Xiangyong; Li, Baolin; Bi, Cuihua; Yin, Huazhan; Huang, Xiting
2012-01-01
Recalibration of perceived simultaneity is widely accepted to minimize delays between multisensory signals owing to their different physical and neural conduction times. With concurrent exposure, temporal recalibration can be either contextually or spatially based. Context-based recalibration was recently described in detail, but evidence for space-based recalibration is scarce, and the competition between these two reference frames is unclear. Here, participants watched two distinct blob-and-tone pairs that alternated laterally, one asynchronous and the other synchronous, and then judged perceived simultaneity and temporal order as the pairs swapped positions and varied in timing. For low-level stimuli with abundant auditory location cues, space-based aftereffects were significantly more apparent (8.3%) than context-based aftereffects (4.2%); without such auditory cues, space-based aftereffects were less apparent (4.4%) and numerically smaller than context-based aftereffects (6.0%). These results suggest that stimulus level and auditory location cues are both determinants of the recalibration frame. Through such joint judgments and a simple reaction time task, our results further revealed that criteria for judging simultaneity versus succession shifted substantially across adaptations without accompanying changes in perceptual latency, implying that criterion shifts, rather than perceptual latency changes, account for space-based and context-based temporal recalibration.
Savin, Douglas N.; Morton, Susanne M.; Whitall, Jill
2013-01-01
Objectives: To determine whether adaptation to a swing-phase perturbation during gait transfers from treadmill to overground walking, the rate of overground deadaptation, and whether overground aftereffects improve step length asymmetry in persons with hemiparetic stroke and gait asymmetry. Methods: Ten participants with stroke and hemiparesis and 10 controls walked overground on an instrumented gait mat, adapted their gait to a swing-phase perturbation on a treadmill, then walked overground on the gait mat again. Primary outcome measures: overground step length symmetry, rate of treadmill step length symmetry adaptation, and rate of overground step length symmetry deadaptation; secondary: overground gait velocity, stride length, and stride cycle duration. Results: Step length symmetry aftereffects generalized to overground walking and adapted at a similar rate on the treadmill in both groups. Aftereffects decayed at a slower rate overground in participants with stroke and temporarily improved overground step length asymmetry. Both groups' overground gait velocity increased post-adaptation due to increased stride length and decreased stride duration. Conclusions: Stroke and hemiparesis do not impair generalization of step length symmetry changes from adapted treadmill to overground walking, but prolong overground aftereffects. Significance: Motor adaptation during treadmill walking may be an effective treatment for improving overground gait asymmetries post-stroke. PMID:24286858
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
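The flow-parsing mechanism described above amounts to a vector subtraction in the optic-flow field: the self-motion component is estimated and removed to recover object motion. A minimal sketch, using hypothetical flow values rather than the study's stimulus parameters:

```python
import math

# Hypothetical 2-D flow vectors (deg/s) at one retinal location.
retinal_flow = (3.0, 1.0)  # motion measured on the retina
ego_flow = (2.5, 0.0)      # flow component predicted from self-motion

# Flow parsing: subtract the self-motion estimate to recover object motion.
object_flow = tuple(r - e for r, e in zip(retinal_flow, ego_flow))
speed = math.hypot(*object_flow)  # residual object speed
print(object_flow, round(speed, 2))
```

In the study, an auditory cue congruent with the residual object motion improved detection, which in this picture means the sound helps disambiguate the small residual vector left after subtraction.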
MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.
Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn
2013-12-01
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields, including medicine, sports, and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In research and application practice, challenges remain in providing visual summaries and drill-down functionality for large motion data collections, and this domain can benefit from appropriate visual retrieval and analysis support for these tasks. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states, serving as a basis for data navigation, exploration, and search. Starting from an overview-first visualization, users can search for interesting sub-sequences of motion via a query-by-example metaphor and explore search results through details on demand. We developed MotionExplorer in close collaboration with its target users, researchers working on human motion synthesis and analysis, and evaluated it in a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable, and robust design. MotionExplorer enables search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
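The hierarchical aggregation of motion states underlying this kind of overview-first design can be illustrated with a toy sketch. This is not the authors' implementation; it assumes simplified 2-D "motion state" feature vectors and naive centroid-linkage merging:

```python
import math

def centroid(cluster):
    """Mean point of a cluster of equal-length feature tuples."""
    n = len(cluster)
    return tuple(sum(p[d] for p in cluster) / n for d in range(len(cluster[0])))

def agglomerate(states, k):
    """Merge the closest pair of clusters until only k remain (toy centroid linkage)."""
    clusters = [[s] for s in states]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = math.dist(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

# Hypothetical motion-state vectors (e.g., summarized joint angles).
states = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
print(agglomerate(states, 2))
```

Repeating the merge at successively coarser levels yields the kind of hierarchy that an overview-first visualization can expose, with drill-down recovering the finer clusters.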
Localized direction selective responses in the dendrites of visual interneurons of the fly
2010-01-01
Background: The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results: Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions: Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus serve as matched filters for wide-field motion patterns. PMID:20384983
Evidence for multisensory spatial-to-motor transformations in aiming movements of children.
King, Bradley R; Kagerer, Florian A; Contreras-Vidal, Jose L; Clark, Jane E
2009-01-01
The extant developmental literature investigating age-related differences in the execution of aiming movements has predominantly focused on visuomotor coordination, despite the fact that additional sensory modalities, such as audition and somatosensation, may contribute to motor planning, execution, and learning. The current study investigated the execution of aiming movements toward both visual and acoustic stimuli. In addition, we examined the interaction between visuomotor and auditory-motor coordination as 5- to 10-yr-old participants executed aiming movements to visual and acoustic stimuli before and after exposure to a visuomotor rotation. Children in all age groups demonstrated significant improvement in performance under the visuomotor perturbation, as indicated by decreased initial directional and root mean squared errors. Moreover, children in all age groups demonstrated significant visual aftereffects during the postexposure phase, suggesting a successful update of their spatial-to-motor transformations. Interestingly, these updated spatial-to-motor transformations also influenced auditory-motor performance, as indicated by distorted movement trajectories during the auditory postexposure phase. The distorted trajectories were present during auditory postexposure even though the auditory-motor relationship was not manipulated. Results suggest that by the age of 5 yr, children have developed a multisensory spatial-to-motor transformation for the execution of aiming movements toward both visual and acoustic targets.
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
Our aim was to examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured response times to hazards embedded in video recordings of real-world driving and has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
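The Dmin range above is reported in log10 minutes of arc; converting the endpoints back to linear units (a plain unit conversion added here for reference, not a figure from the paper) shows the span of displacement thresholds:

```python
# Dmin was reported as log10 minutes of arc; convert range endpoints to arcmin.
best_log, worst_log = -0.88, -0.12
best_arcmin = 10 ** best_log    # most sensitive observer, ~0.13 arcmin
worst_arcmin = 10 ** worst_log  # least sensitive observer, ~0.76 arcmin
print(round(best_arcmin, 2), round(worst_arcmin, 2))
```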
Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin
2017-01-01
Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122
Receptive fields for smooth pursuit eye movements and motion perception.
Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R
2010-12-01
Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
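The direction bandwidth above is given as full width at half height; assuming a Gaussian tuning curve (an assumption for illustration, the paper's exact fitting function may differ), it converts to a standard deviation as follows:

```python
import math

def fwhm_to_sigma(fwhm):
    """Convert Gaussian full width at half maximum to standard deviation."""
    # FWHM = 2 * sqrt(2 * ln 2) * sigma, so sigma = FWHM / 2.3548...
    return fwhm / (2 * math.sqrt(2 * math.log(2)))

# 26 angular degrees FWHM -> sigma of about 11 angular degrees.
print(round(fwhm_to_sigma(26.0), 1))
```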
Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco
2013-05-01
Multiple visual signals are relevant to perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues on the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or 1g-acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have been previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued by the preceding open-air curves only. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including para-hippocampus and hippocampus were more activated by horizontal motion, preferably at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Cerebellar inactivation impairs memory of learned prism gaze-reach calibrations.
Norris, Scott A; Hathaway, Emily N; Taylor, Jordan A; Thach, W Thomas
2011-05-01
Three monkeys performed a visually guided reach-touch task with and without laterally displacing prisms. The prisms offset the normally aligned gaze/reach and subsequent touch. Naive monkeys showed adaptation, such that on repeated prism trials the gaze-reach angle widened and touches hit nearer the target. On the first subsequent no-prism trial the monkeys exhibited an aftereffect, such that the widened gaze-reach angle persisted and touches missed the target in the direction opposite that of initial prism-induced error. After 20-30 days of training, monkeys showed long-term learning and storage of the prism gaze-reach calibration: they switched between prism and no-prism and touched the target on the first trials without adaptation or aftereffect. Injections of lidocaine into posterolateral cerebellar cortex or muscimol or lidocaine into dentate nucleus temporarily inactivated these structures. Immediately after injections into cortex or dentate, reaches were displaced in the direction of prism-displaced gaze, but no-prism reaches were relatively unimpaired. There was little or no adaptation on the day of injection. On days after injection, there was no adaptation and both prism and no-prism reaches were horizontally, and often vertically, displaced. A single permanent lesion (kainic acid) in the lateral dentate nucleus of one monkey immediately impaired only the learned prism gaze-reach calibration and in subsequent days disrupted both learning and performance. This effect persisted for the 18 days of observation, with little or no adaptation.
Tracking without perceiving: a dissociation between eye movements and motion perception.
Spering, Miriam; Pomplun, Marc; Carrasco, Marisa
2011-02-01
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353
fMRI response during visual motion stimulation in patients with late whiplash syndrome.
Freitag, P; Greenlee, M W; Wachter, K; Ettlin, T M; Radue, E W
2001-01-01
After whiplash trauma, up to one fourth of patients develop chronic symptoms including head and neck pain and cognitive disturbances. Resting perfusion single-photon-emission computed tomography (SPECT) found decreased temporoparietooccipital tracer uptake among these long-term symptomatic patients with late whiplash syndrome. As MT/MST (V5/V5a) are located in that area, this study addressed the question of whether these patients show impairments in visual motion perception. We examined five symptomatic patients with late whiplash syndrome, five asymptomatic patients after whiplash trauma, and a control group of seven volunteers without a history of trauma. Tests for visual motion perception and functional magnetic resonance imaging (fMRI) measurements during visual motion stimulation were performed. Symptomatic patients showed a significant reduction in their ability to perceive coherent visual motion compared with controls, whereas the asymptomatic patients did not show this effect. fMRI activation was similar during random dot motion in all three groups, but was significantly decreased during coherent dot motion in the symptomatic patients compared with the other two groups. Reduced psychophysical motion performance and reduced fMRI responses in symptomatic patients with late whiplash syndrome both point to a functional impairment in cortical areas sensitive to coherent motion. Larger studies are needed to confirm these clinical and functional imaging results to provide a possible additional diagnostic criterion for the evaluation of patients with late whiplash syndrome.
Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion
Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard
2016-01-01
The perception and production of biological movements is characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation focused on visual or kinaesthetic modalities in a unimodal context, in this paper we show that auditory dynamics strikingly biases visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by the friction of the pen on a paper when someone is drawing. Sounds were presented diotically and the auditory motion velocity was evoked through the friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortion induced by inconsistent elliptical kinematics in both visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in the visuo-motor coupling in a multisensory context. PMID:27119411
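The 1/3 power law mentioned above relates tangential speed v to path curvature C as v(t) = K · C(t)^(−1/3) (equivalently v = K · R^(1/3) for radius of curvature R), so biological motion slows where the trajectory bends sharply. A minimal numerical sketch for an ellipse (the gain K and the ellipse axes are arbitrary choices for illustration, not values from the study):

```python
import numpy as np

def one_third_power_law_speed(curvature, K=1.0):
    """Biological tangential speed: v = K * C**(-1/3)."""
    return K * curvature ** (-1.0 / 3.0)

# Curvature of an ellipse x = a*cos(t), y = b*sin(t):
# C(t) = a*b / (a^2 sin^2 t + b^2 cos^2 t)^(3/2)
a, b = 2.0, 1.0
t = np.linspace(0, 2 * np.pi, 1000)
C = a * b / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5

v = one_third_power_law_speed(C)
# Speed is lowest at the high-curvature ends of the major axis (t = 0, pi)
# and highest along the flatter sides of the ellipse.
```

Kinematics deviating from this velocity-curvature coupling are what the study describes as "biologically inconsistent" motion.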
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion to dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.
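The magnitude-versus-vector distinction above can be sketched numerically. The sketch below is illustrative only (the function names and the median-flow baseline are assumptions, not the authors' model): a magnitude-based conspicuity map responds to fast motion anywhere, while a vector-based map highlights motion that deviates from the dominant flow, as a panning camera would produce.

```python
import numpy as np

def magnitude_saliency(flow):
    """Conspicuity from speed magnitude alone: fast pixels are salient."""
    return np.hypot(flow[..., 0], flow[..., 1])

def vector_saliency(flow):
    """Conspicuity from the speed vector: pixels whose motion deviates
    from the dominant (median) flow are salient, e.g. an object moving
    against a panning background."""
    dominant = np.median(flow.reshape(-1, 2), axis=0)
    return np.hypot(flow[..., 0] - dominant[0], flow[..., 1] - dominant[1])

# Toy flow field: background pans right at 5 px/frame,
# one small region moves up at the same speed.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 5.0            # horizontal background motion
flow[1, 1] = (0.0, 5.0)       # the odd-one-out object

mag = magnitude_saliency(flow)
vec = vector_saliency(flow)
# The magnitude map cannot distinguish the object from the background
# (same speed everywhere), whereas the vector map singles it out.
```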
Visualization of Kepler's Laws of Planetary Motion
ERIC Educational Resources Information Center
Lu, Meishu; Su, Jun; Wang, Weiguo; Lu, Jianlong
2017-01-01
For this article, we use a 3D printer to print a surface similar to universal gravitation for demonstrating and investigating Kepler's laws of planetary motion describing the motion of a small ball on the surface. This novel experimental method allows Kepler's laws of planetary motion to be visualized and will contribute to improving the…
Visual fatigue modeling for stereoscopic video shot based on camera motion
NASA Astrophysics Data System (ADS)
Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing
2014-11-01
As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. Causes of visual discomfort from stereoscopic video include the conflict between accommodation and convergence, excessive binocular parallax, and fast motion of objects. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfort zone are analyzed. According to the human visual system (HVS), viewers need only converge their eyes on specific objects when the camera and background are static. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be indicated according to the proposed algorithm. Compared with conventional algorithms which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
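The regression step described above can be sketched as follows; the three per-shot factors (spatial structure, motion scale, comfort-zone violation) and the data are hypothetical placeholders for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical normalized per-shot factors:
# [spatial structure, motion scale, comfort-zone violation]
X = np.array([
    [0.2, 0.1, 0.0],
    [0.5, 0.4, 0.1],
    [0.8, 0.9, 0.6],
    [0.3, 0.2, 0.1],
    [0.9, 0.7, 0.8],
])
# Illustrative subjective fatigue scores from a viewing test.
y = np.array([1.0, 2.1, 4.2, 1.5, 4.8])

# Fit fatigue ~ w.x + b by ordinary least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_fatigue(features):
    """Total fatigue score for one shot from its factor values."""
    return float(np.dot(coef[:-1], features) + coef[-1])
```

The factor weights would in practice differ by camera condition (static vs. moving), as the abstract notes.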
Afterimage induced neural activity during emotional face perception.
Cheal, Jenna L; Heisz, Jennifer J; Walsh, Jennifer A; Shedden, Judith M; Rutherford, M D
2014-02-26
The N170 response differs when positive versus negative facial expressions are viewed. This neural response could be associated with the perception of emotions, or with some feature of the stimulus. We used an aftereffect paradigm to clarify this issue. Consistent with previous reports of emotional aftereffects, a neutral face was more likely to be described as happy following sad face adaptation, and more likely to be described as sad following happy face adaptation. In addition, similar to previous observations with actual emotional faces, we found differences in the latency of the N170 elicited by the neutral face following sad versus happy face adaptation, demonstrating that the emotion-specific effect on the N170 emerges even when emotion expressions are perceptually different but physically identical. The re-entry of emotional information from other brain regions may be driving the emotional aftereffects and the N170 latency differences. Copyright © 2014 Elsevier B.V. All rights reserved.
Adaptation and visual salience
McDermott, Kyle C.; Malkoc, Gokhan; Mulligan, Jeffrey B.; Webster, Michael A.
2011-01-01
We examined how the salience of color is affected by adaptation to different color distributions. Observers searched for a color target on a dense background of distractors varying along different directions in color space. Prior adaptation to the backgrounds enhanced search on the same background while adaptation to orthogonal background directions slowed detection. Advantages of adaptation were seen for both contrast adaptation (to different color axes) and chromatic adaptation (to different mean chromaticities). Control experiments, including analyses of eye movements during the search, suggest that these aftereffects are unlikely to reflect simple learning or changes in search strategies on familiar backgrounds, and instead result from how adaptation alters the relative salience of the target and background colors. Comparable effects were observed along different axes in the chromatic plane or for axes defined by different combinations of luminance and chromatic contrast, consistent with visual search and adaptation mediated by multiple color mechanisms. Similar effects also occurred for color distributions characteristic of natural environments with strongly selective color gamuts. Our results are consistent with the hypothesis that adaptation may play an important functional role in highlighting the salience of novel stimuli by discounting ambient properties of the visual environment. PMID:21106682
Adaptation to implied tilt: extensive spatial extrapolation of orientation gradients
Roach, Neil W.; Webb, Ben S.
2013-01-01
To extract the global structure of an image, the visual system must integrate local orientation estimates across space. Progress is being made toward understanding this integration process, but very little is known about whether the presence of structure exerts a reciprocal influence on local orientation coding. We have previously shown that adaptation to patterns containing circular or radial structure induces tilt-aftereffects (TAEs), even in locations where the adapting pattern was occluded. These spatially “remote” TAEs have novel tuning properties and behave in a manner consistent with adaptation to the local orientation implied by the circular structure (but not physically present) at a given test location. Here, by manipulating the spatial distribution of local elements in noisy circular textures, we demonstrate that remote TAEs are driven by the extrapolation of orientation structure over remarkably large regions of visual space (more than 20°). We further show that these effects are not specific to adapting stimuli with polar orientation structure, but require a gradient of orientation change across space. Our results suggest that mechanisms of visual adaptation exploit orientation gradients to predict the local pattern content of unfilled regions of space. PMID:23882243
Vingilis-Jaremko, Larissa; Maurer, Daphne; Rhodes, Gillian; Jeffery, Linda
2016-08-03
Adults who missed early visual input because of congenital cataracts later have deficits in many aspects of face processing. Here we investigated whether they make normal judgments of facial attractiveness. In particular, we studied whether their perceptions are affected normally by a face's proximity to the population mean, as is true of typically developing adults, who find average faces to be more attractive than most other faces. We compared the judgments of facial attractiveness of 12 cataract-reversal patients to norms established from 36 adults with normal vision. Participants viewed pairs of adult male and adult female faces that had been transformed 50% toward and 50% away from their respective group averages, and selected which face was more attractive. Averageness influenced patients' judgments of attractiveness, but to a lesser extent than controls. The results suggest that cataract-reversal patients are able to develop a system for representing faces with a privileged position for an average face, consistent with evidence from identity aftereffects. However, early visual experience is necessary to set up the neural architecture necessary for averageness to influence perceptions of attractiveness with its normal potency. © The Author(s) 2016.
Dual processing of visual rotation for bipedal stance control.
Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene
2016-10-01
When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75-48 deg s(-1) ) and direction of visual rotation was pseudo-randomly varied and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships. 
Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation
NASA Technical Reports Server (NTRS)
O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.
2006-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. 
During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.
2006-07-01
Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
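The residual-motion metric above (standard deviation of the respiratory signal inside the gating window) can be sketched as follows; the sinusoidal breathing trace and the definition of the displacement gate are assumptions for illustration, not the study's clinical data.

```python
import numpy as np

def residual_motion(signal, duty_cycle):
    """Std of a displacement-gated respiratory signal: the beam is 'on'
    only for the fraction of samples closest to full exhalation."""
    n_on = max(1, int(len(signal) * duty_cycle))
    gated = np.sort(signal)[:n_on]   # samples nearest the exhale baseline
    return gated.std()

# Synthetic breathing trace: 4 s period, 0-5 mm displacement.
t = np.linspace(0, 60, 3000)
breath = 5.0 * (1 - np.cos(2 * np.pi * t / 4)) / 2

r30 = residual_motion(breath, 0.30)
r50 = residual_motion(breath, 0.50)
# Wider duty cycles admit more of the breathing excursion,
# so residual motion grows with duty cycle, as the Results describe.
```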
Vection and visually induced motion sickness: how are they related?
Keshavarz, Behrang; Riecke, Bernhard E.; Hettinger, Lawrence J.; Campos, Jennifer L.
2015-01-01
The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (vection), however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate as to whether vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be VIMS without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that addresses this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in future. PMID:25941509
NASA Technical Reports Server (NTRS)
Kennedy, Robert S.; Jones, Marshall B.; Lilienthal, Michael G.; Harm, Deborah L.
1994-01-01
Motion sickness symptoms are an unwanted by-product of exposure to virtual environments. This problem is not new and was reported in the early flight simulators and experiments on ego motions and vection. The cardinal symptom of motion sickness is, of course, vomiting, but this symptom is ordinarily preceded by a variety of other symptoms. In his classic studies of motion sickness conducted before and during World War II, G. R. Wendt introduced a three point scale to score motion sickness beyond a vomit/no vomit dichotomy. Later, Navy scientists developed a Motion Sickness Questionnaire (MSQ), originally for use in a slowly rotating room. In the last 20 years the MSQ has been used in a series of studies of air, sea, and space sickness. Only recently, however, has it been appreciated that symptom patterns in the MSQ are not uniform but vary with the way sickness is induced. In seasickness, for example, nausea is the most prominent symptom. In Navy simulators, however, the most common symptom is eye strain, especially when cathode ray tubes are employed in the simulation. The latter result was obtained in a survey of over 1,500 pilot exposures. Using this database, Essex scientists conducted a factor analysis of the MSQ. We found that signs and symptoms of motion sickness fell mainly into three clusters: 1) oculomotor disturbance, 2) nausea and related neurovegetative problems, and 3) disorientation, ataxia, and vertigo. We have since rescored the MSQ results obtained in Navy simulators in terms of these three components. We have also compared these and other profiles obtained from three different virtual reality systems to profiles obtained in sea sickness, space sickness, and alcohol intoxication. We will show examples of these various profiles and point out similarities and differences among them which indicate aspects of what might be called 'virtual-reality sickness'.
NASA Astrophysics Data System (ADS)
Baka, N.; Lelieveldt, B. P. F.; Schultz, C.; Niessen, W.; van Walsum, T.
2015-05-01
During percutaneous coronary interventions (PCI) catheters and arteries are visualized by x-ray angiography (XA) sequences, using brief contrast injections to show the coronary arteries. If we could continue visualizing the coronary arteries after the contrast agent passed (thus in non-contrast XA frames), we could potentially lower contrast use, which is advantageous due to the toxicity of the contrast agent. This paper explores the possibility of such visualization in mono-plane XA acquisitions with a special focus on respiratory based coronary artery motion estimation. We use the patient specific coronary artery centerlines from pre-interventional 3D CTA images to project on the XA sequence for artery visualization. To achieve this, a framework for registering the 3D centerlines with the mono-plane 2D + time XA sequences is presented. During the registration the patient specific cardiac and respiratory motion is learned. We investigate several respiratory motion estimation strategies with respect to accuracy, plausibility and ease of use for motion prediction in XA frames with and without contrast. The investigated strategies include diaphragm motion based prediction, and respiratory motion extraction from the guiding catheter tip motion. We furthermore compare translational and rigid respiratory based heart motion. We validated the accuracy of the 2D/3D registration and the respiratory and cardiac motion estimations on XA sequences of 12 interventions. The diaphragm based motion model and the catheter tip derived motion achieved 1.58 mm and 1.83 mm median 2D accuracy, respectively. On a subset of four interventions we evaluated the artery visualization accuracy for non-contrast cases. Both diaphragm, and catheter tip based prediction performed similarly, with about half of the cases providing satisfactory accuracy (median error < 2 mm).
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
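The contrast/assimilation finding above reduces to simple arithmetic on target and context velocities. The sketch below is a schematic of that result only; the unit context weight and the unweighted average are simplifying assumptions, not fitted parameters from the study.

```python
def perceived_velocity(target, context, w=1.0):
    """Perception: motion contrast -- context motion is subtracted
    from target motion (w is a context weight, here set to 1)."""
    return target - w * context

def pursuit_velocity(target, context):
    """Steady-state smooth pursuit: motion assimilation -- target and
    context velocities are averaged."""
    return (target + context) / 2.0

# A brief perturbation: the target moves at 10 deg/s while the
# surrounding context drifts at 4 deg/s in the same direction.
target, context = 10.0, 4.0
# Perceived speed (6 deg/s) is pushed away from the context (contrast),
# while pursuit speed (7 deg/s) is pulled toward it (assimilation).
```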
Representation of visual gravitational motion in the human vestibular cortex.
Indovina, Iole; Maffei, Vincenzo; Bosco, Gianfranco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco
2005-04-15
How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain.
Multiplexing in the primate motion pathway.
Huk, Alexander C
2012-06-01
This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing--and the computations required for demultiplexing--may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.
A model for the pilot's use of motion cues in roll-axis tracking tasks
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.
1977-01-01
Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms
Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe
2017-01-01
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory (audio-visual) targets were used in the adaptation phase, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, and auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produced proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs than the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs than the sensorimotor pointing activity per se.
These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233
NASA Astrophysics Data System (ADS)
Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji
A prediction mechanism is necessary for human visual motion processing to compensate for delays in the sensory-motor system. A previous study discussed "proactive control" as one example of human predictive function, in which hand motion preceded a virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain, obtained from environmental stimuli, was shortened by more than 10%. This shortening of the period accelerated the hand motion as soon as the visual information was cut off, causing the hand motion to precede the target motion. Although the hand's lead in the blind region was reset by environmental information when the target entered the visible region, the hand motion preceded the target on average when the predictive mechanism dominated the error-corrective mechanism.
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli with normal processing for simple (i.e., first-order) form stimuli suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in neurologic mechanisms necessary for integrating all early visual input.
Multiple-stage ambiguity in motion perception reveals global computation of local motion directions.
Rider, Andrew T; Nishida, Shin'ya; Johnston, Alan
2016-12-01
The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
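The conventional rigid-translation strategy the abstract refers to can be illustrated with a small "intersection of constraints" computation: each aperture yields only the motion component along its contour normal, n_i · v = c_i, and a single global translation v is recovered by least squares over many apertures. This is an illustrative sketch of the standard computation, not the authors' stimulus or model:

```python
import numpy as np

# Intersection-of-constraints sketch: each local 1D measurement gives only
# the normal component n_i . v = c_i of the true velocity (the aperture
# problem); pooling many apertures recovers the global translation.

rng = np.random.default_rng(0)
v_true = np.array([3.0, -1.0])            # hidden global translation (deg/s)
angles = rng.uniform(0, np.pi, 20)        # orientations of 20 local 1D features
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
c = normals @ v_true                      # locally measured normal speeds

# Least-squares solution of the stacked constraints N v = c.
v_est, *_ = np.linalg.lstsq(normals, c, rcond=None)
print(v_est)  # approximately [3., -1.]
```

The stimulus described above is constructed so that this system stays ambiguous even globally: the same set of normal-speed measurements is also consistent with rigid rotations, which is why a purely translational pooling rule cannot explain the percept.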
Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.
Seymour, Kiley J; Clifford, Colin W G
2012-05-01
Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, since a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
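The qualitative linear Dmax-vs-eccentricity relation reported above can be written as Dmax(E) = D0 + k·E. The sketch below uses hypothetical placeholder values for the intercept and slope, not the model's fitted parameters:

```python
# Linear Dmax-vs-eccentricity relation; d0 and slope are hypothetical
# placeholders, not fitted values from the model or from Baker & Braddick.

def dmax_deg(eccentricity_deg, d0=0.5, slope=0.1):
    """Largest flash displacement still seen as apparent motion (deg)."""
    return d0 + slope * eccentricity_deg

print(dmax_deg(0.0))   # 0.5  (fovea)
print(dmax_deg(20.0))  # 2.5  (20 deg into the periphery)
```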
Indovina, Iole; Maffei, Vincenzo; Lacquaniti, Francesco
2013-09-01
By simulating self-motion on a virtual rollercoaster, we investigated whether acceleration cued by the optic flow affected the estimate of time-to-passage (TTP) to a target. In particular, we studied the role of a visual acceleration (1 g = 9.8 m/s²) simulating the effects of gravity in the scene, by manipulating motion law (accelerated or decelerated at 1 g, constant speed) and motion orientation (vertical, horizontal). Thus, 1-g-accelerated motion in the downward direction or decelerated motion in the upward direction was congruent with the effects of visual gravity. We found that acceleration (positive or negative) is taken into account but is overestimated in magnitude in the calculation of TTP, independently of orientation. In addition, participants signaled TTP earlier when the rollercoaster accelerated downward at 1 g (as during free fall), with respect to when the same acceleration occurred along the horizontal orientation. This time shift indicates an influence of the orientation relative to visual gravity on response timing that could be attributed to the anticipation of the effects of visual gravity on self-motion along the vertical, but not the horizontal orientation. Finally, precision in TTP estimates was higher during vertical fall than when traveling at constant speed along the vertical orientation, consistent with a higher noise in TTP estimates when the motion violates gravity constraints.
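The kinematics behind TTP under constant acceleration, and the consequence of overestimating that acceleration, can be sketched as follows. Solving distance = v·t + a·t²/2 for t gives the true TTP; plugging in an inflated |a| yields an earlier estimate, consistent with the overestimation result above. The numbers and the 1.5× inflation factor are illustrative, not the study's data:

```python
import math

# Time-to-passage (TTP) under constant acceleration: solve
#   distance = speed*t + 0.5*accel*t**2  for t >= 0.
# Overestimating the acceleration magnitude shortens the computed TTP.
# All numeric values here are illustrative placeholders.

def ttp(distance, speed, accel):
    """Positive root of the quadratic; falls back to d/v when accel == 0."""
    if accel == 0:
        return distance / speed
    disc = speed**2 + 2 * accel * distance
    return (-speed + math.sqrt(disc)) / accel

d, v, g = 50.0, 10.0, 9.8           # m, m/s, m/s^2
true_ttp = ttp(d, v, g)
judged   = ttp(d, v, 1.5 * g)       # inflated acceleration -> earlier TTP
print(true_ttp, judged)             # judged < true_ttp
```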
Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque
Russ, Brian E.; Kaneko, Takaaki; Saleem, Kadharbatcha S.; Berman, Rebecca A.; Leopold, David A.
2016-01-01
Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. SIGNIFICANCE STATEMENT Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This “reafferent” motion propagates into the brain as signals that must be interpreted in the context of real object motion. 
The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion. PMID:27629710
Magnani, Barbara; Frassinetti, Francesca; Ditye, Thomas; Oliveri, Massimiliano; Costantini, Marcello; Walsh, Vincent
2014-05-15
Prismatic adaptation (PA) has been shown to affect left-to-right spatial representations of temporal durations. A leftward aftereffect usually distorts time representation toward an underestimation, while a rightward aftereffect usually results in an overestimation of temporal durations. Here, we used functional magnetic resonance imaging (fMRI) to study the neural mechanisms that underlie PA effects on time perception. Additionally, we investigated whether the effect of PA on time is transient or stable and, in the case of stability, which cortical areas are responsible for its maintenance. Functional brain images were acquired while participants (n=17) performed a time reproduction task and a control-task before, immediately after, and 30 min after PA inducing a leftward aftereffect, administered outside the scanner. The leftward aftereffect induced an underestimation of time intervals that lasted for at least 30 min. The left anterior insula and the left superior frontal gyrus showed increased functional activation immediately after versus before PA in the time versus the control-task, suggesting these brain areas to be involved in the executive spatial manipulation of the representation of time. The left middle frontal gyrus showed an increase of activation after 30 min relative to before PA. This suggests that this brain region may play a key role in the maintenance of the PA effect over time. Copyright © 2014. Published by Elsevier Inc.
Ukezono, Masatoshi; Nakashima, Satoshi F; Sudo, Ryunosuke; Yamazaki, Akira; Takano, Yuji
2015-01-01
Zajonc's drive theory postulates that arousal enhanced through the perception of the presence of other individuals plays a crucial role in social facilitation (Zajonc, 1965). Here, we conducted two experiments to examine whether the elevation of arousal through a stepping exercise performed in front of others as an exogenous factor causes social facilitation of a cognitive task in a condition where the presence of others does not elevate the arousal level. In the main experiment, as an "aftereffect of social stimulus," we manipulated the presence or absence of others and arousal enhancement before participants conducted the primary cognitive task. The results showed that the strongest social facilitation was induced by the combination of the perception of others and arousal enhancement. In a supplementary experiment, we manipulated these factors by adding the presence of another person during the task. The results showed that the effect of the presence of the other during the primary task is enough on its own to produce facilitation of task performance regardless of the arousal enhancement as an aftereffect of social stimulus. Our study therefore extends the framework of Zajonc's drive theory in that the combination of the perception of others and enhanced arousal as an "aftereffect" was found to induce social facilitation especially when participants did not experience the presence of others while conducting the primary task.
An aftereffect of global warming on tropical Pacific decadal variability
NASA Astrophysics Data System (ADS)
Zheng, Jian; Liu, Qinyu; Wang, Chuanyang
2018-03-01
Studies have shown that global warming over the past six decades can weaken the tropical Pacific Walker circulation and maintain the positive phase of the Interdecadal Pacific Oscillation (IPO). Based on observations and model simulations, another aftereffect of global warming on the IPO is found. Even after removing linear trends (global warming signals) from observations, the tropical Pacific climate still exhibited some obvious differences between two IPO negative phases. The boreal winter (DJF) equatorial central-eastern Pacific sea surface temperature (SST) was colder during the 1999-2014 period (P2) than during 1961-1976 (P1). This difference may have been a result of nonlinear modulation of precipitation by global warming; i.e., in the climatological rainy region, the core area of the tropical Indo-western Pacific warm pool receives more precipitation through the "wet-get-wetter" mechanism. Positive precipitation anomalies in the warm pool during P2 are much stronger than those during P1, even after subtracting the linear trend. Corresponding to the differences in precipitation, the Pacific Walker circulation is stronger in P2 than in P1. The consequent easterly winds over the equatorial Pacific led to a colder equatorial eastern-central Pacific during P2. Therefore, tropical Pacific climate differences between the two negative IPO phases are aftereffects of global warming. These aftereffects are supported by the results of coupled climate model experiments, with and without global warming.
Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction
Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta
2018-01-01
The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki
2009-02-01
The human brain uses visual motion inputs not only for generating the subjective sensation of motion but also for directly guiding involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have resulted only in partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG induced by the same motion onset of a large-field sinusoidal drifting grating while varying the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found quantitative agreement in peak latency (approximately 100-150 ms) and correlated changes with spatiotemporal frequency between the MEG and MFR. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., have the lowest detection threshold) at higher spatial frequencies and have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.
Coherent modulation of stimulus colour can affect visually induced self-motion perception.
Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2010-01-01
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. The colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation: both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.
Impaired Velocity Processing Reveals an Agnosia for Motion in Depth.
Barendregt, Martijn; Dumoulin, Serge O; Rokers, Bas
2016-11-01
Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a failure in the processing of one of the two binocular cues to motion in depth: changes in binocular disparity over time or interocular velocity differences. We isolated the contributions of these two cues and found that sensitivity to interocular velocity differences, but not changes in binocular disparity, varied systematically with observers' ability to judge motion direction. We therefore conclude that the inability to interpret motion in depth is due to a failure in the neural mechanisms that combine velocity signals from the two eyes. Given these results, we argue that the deficit should be considered a prevalent but previously unrecognized agnosia specific to the perception of visual motion. © The Author(s) 2016.
Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin
2017-06-01
Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effects of Vibrotactile Feedback on Human Learning of Arm Motions
Bark, Karlin; Hyman, Emily; Tan, Frank; Cha, Elizabeth; Jax, Steven A.; Buxbaum, Laurel J.; Kuchenbecker, Katherine J.
2015-01-01
Tactile cues generated from lightweight, wearable actuators can help users learn new motions by providing immediate feedback on when and how to correct their movements. We present a vibrotactile motion guidance system that measures arm motions and provides vibration feedback when the user deviates from a desired trajectory. A study was conducted to test the effects of vibrotactile guidance on a subject's ability to learn arm motions. Twenty-six subjects learned motions of varying difficulty with both visual (V) and combined visual and vibrotactile (VVT) feedback over the course of four days of training. After four days of rest, subjects returned to perform the motions from memory with no feedback. We found that augmenting visual feedback with vibrotactile feedback helped subjects significantly reduce the root-mean-square (RMS) angle error of their limb while they were learning the motions, particularly for 1-DOF motions. Analysis of the retention data showed no significant difference in RMS angle errors between feedback conditions. PMID:25486644
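The retention metric reported above, RMS angle error between the performed and target trajectories, can be computed as follows. This is a generic sketch of the metric; any per-joint weighting is left out since the abstract does not specify one.

```python
import math

def rms_angle_error(actual, desired):
    """Root-mean-square error between measured and target joint angles
    (in degrees), sampled at matching time points."""
    if len(actual) != len(desired):
        raise ValueError("trajectories must have the same number of samples")
    squared = [(a - d) ** 2 for a, d in zip(actual, desired)]
    return math.sqrt(sum(squared) / len(squared))
```

For example, `rms_angle_error([0.0, 3.0, 4.0], [0.0, 0.0, 0.0])` is sqrt(25/3), about 2.89 degrees.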
Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning
Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka
2012-01-01
Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were very highly correlated with decoded tuning function changes only in V3A, an area known to be highly responsive to global motion in humans. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849
Visual event-related potentials to biological motion stimuli in autism spectrum disorders
Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan
2014-01-01
Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated to biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher order processing deficits in ASD. PMID:23887808
ERIC Educational Resources Information Center
Samar, Vincent J.; Parasnis, Ila
2007-01-01
Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. "Brain and Cognition," 49, 170-181; Samar, V. J., & Parasnis, I. (2005).…
Orientation of selective effects of body tilt on visually induced perception of self-motion.
Nakamura, S; Shimojo, S
1998-10-01
We examined the effect of body posture on visually induced perception of self-motion (vection) at various angles of observer tilt. The experiment indicated that tilting the observer's body enhanced the perceived strength of vertical vection, whereas body tilt had no effect on horizontal vection. This result suggests an interaction between the effects of visual and vestibular information on the perception of self-motion.
Re-examining overlap between tactile and visual motion responses within hMT+ and STS
Jiang, Fang; Beauchamp, Michael S.; Fine, Ione
2015-01-01
Here we examine overlap between tactile and visual motion BOLD responses within the human MT+ complex. Although several studies have reported tactile responses overlapping with hMT+, many used group average analyses, leaving it unclear whether these responses were restricted to sub-regions of hMT+. Moreover, previous studies employed either a tactile task or passive stimulation, leaving it unclear whether or not tactile responses in hMT+ are simply the consequence of visual imagery. Here we carried out a replication of one of the classic papers finding tactile responses in hMT+ (Hagen et al. 2002). We mapped MT and MST in individual subjects using visual field localizers. We then examined responses to tactile motion on the arm, either presented passively or in the presence of a visual task performed at fixation designed to minimize visualization of the concurrent tactile stimulation. To our surprise, without a visual task, we found only weak tactile motion responses in MT (6% of voxels showing tactile responses) and MST (2% of voxels). With an unrelated visual task designed to withdraw attention from the tactile modality, responses in MST reduced to almost nothing (<1% of voxels). Consistent with previous results, we did observe tactile responses in STS regions superior and anterior to hMT+. Despite the lack of individual overlap, group averaged responses produced strong spurious overlap between tactile and visual motion responses within hMT+ that resembled those observed in previous studies. The weak nature of tactile responses in hMT+ (and their abolition by withdrawal of attention) suggests that hMT+ may not serve as a supramodal motion processing module. PMID:26123373
Larcombe, Stephanie J.; Kennard, Chris
2017-01-01
Abstract Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145–156, 2018. © 2017 Wiley Periodicals, Inc. PMID:28963815
Slushy weightings for the optimal pilot model [considering visual tracking task]
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
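The quadratic cost function referred to above has the standard optimal-control form J = Σ (xᵀQx + uᵀRu). A minimal sketch follows, assuming diagonal Q and R for simplicity; the paper's "slushy" perception-dependent weightings are not modeled here.

```python
def quadratic_cost(states, controls, Q, R):
    """Discrete-time quadratic tracking cost J = sum_k (x_k'Qx_k + u_k'Ru_k).

    Q and R are given as diagonal weighting vectors for simplicity; the
    paper's 'slushy' weightings, which shift with the pilot's perception
    of task-relevant variables, are not modeled in this sketch.
    """
    J = 0.0
    for x, u in zip(states, controls):
        J += sum(q * xi * xi for q, xi in zip(Q, x))  # state penalty
        J += sum(r * ui * ui for r, ui in zip(R, u))  # control penalty
    return J
```

Larger weights in Q penalize deviations of the corresponding tracking variables more heavily, which is the knob the model adjusts to reflect what the pilot perceives as important.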
Curvilinear approach to an intersection and visual detection of a collision.
Berthelon, C; Mestre, D
1993-09-01
Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion--for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.
Malinina, E S
2014-01-01
The spatial specificity of the auditory aftereffect was studied after short-term adaptation (5 s) to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses with constant amplitude; test stimuli had either constant or changing amplitude: an increase in impulse amplitude within a sequence was perceived by listeners as approach of the sound source, a decrease as its withdrawal. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The data showed that perception of the simulated movement of the sound source in both spatial domains had common characteristic features, evident both in control conditions without adaptation and after adaptation to noise. In the absence of adaptation, at both distances, the psychometric curves were asymmetric: listeners more often judged the test stimuli as approaching. This overestimation of test stimuli as approaching was more pronounced when they were presented from the 1.1 m distance, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both domains: they were observed only when adapting and test stimuli coincided spatially and were absent when they were separated. The aftereffects observed in the two spatial domains were similar in direction and magnitude: compared with control, listeners more often judged the test stimuli as withdrawing. The result of this aftereffect was a restoration of the symmetry of the psychometric curves and of the equiprobable judgment of the movement direction of the test signals.
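The restoration of symmetry described above can be summarized with a simple signed bias score over the listeners' approach/withdraw judgments; `approach_bias` is a hypothetical helper for illustration, not the authors' analysis.

```python
def approach_bias(judgments):
    """Signed bias of approach/withdraw judgments relative to symmetry.

    judgments: booleans, True = listener judged the test stimulus as
    approaching. Positive values reproduce the pre-adaptation asymmetry
    (over-reporting approach); values near zero correspond to the
    restored symmetry after adaptation. Illustrative only.
    """
    if not judgments:
        raise ValueError("empty judgment list")
    return sum(judgments) / len(judgments) - 0.5
```

For example, 70 "approaching" responses out of 100 trials give a bias of +0.2, while 50 out of 100 give 0.0.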
Vossen, Alexandra; Gross, Joachim; Thut, Gregor
2015-01-01
Background: Periodic stimulation of occipital areas using transcranial alternating current stimulation (tACS) at alpha (α) frequency (8–12 Hz) enhances electroencephalographic (EEG) α-oscillation long after tACS-offset. Two mechanisms have been suggested to underlie these changes in oscillatory EEG activity: tACS-induced entrainment of brain oscillations and/or tACS-induced changes in oscillatory circuits by spike-timing dependent plasticity. Objective: We tested to what extent plasticity can account for tACS-aftereffects when controlling for entrainment “echoes.” To this end, we used a novel, intermittent tACS protocol and investigated the strength of the aftereffect as a function of phase continuity between successive tACS episodes, as well as the match between stimulation frequency and endogenous α-frequency. Methods: 12 healthy participants were stimulated at around individual α-frequency for 11–15 min in four sessions using intermittent tACS or sham. Successive tACS events were either phase-continuous or phase-discontinuous, and either 3 or 8 s long. EEG α-phase and power changes were compared after and between episodes of α-tACS across conditions and against sham. Results: α-aftereffects were successfully replicated after intermittent stimulation using 8-s but not 3-s trains. These aftereffects did not reveal any of the characteristics of entrainment echoes in that they were independent of tACS phase-continuity and showed neither prolonged phase alignment nor frequency synchronization to the exact stimulation frequency. Conclusion: Our results indicate that plasticity mechanisms are sufficient to explain α-aftereffects in response to α-tACS, and inform models of tACS-induced plasticity in oscillatory circuits. Modifying brain oscillations with tACS holds promise for clinical applications in disorders involving abnormal neural synchrony. PMID:25648377
Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia
ERIC Educational Resources Information Center
Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue
2011-01-01
Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…
Visualization of 3D elbow kinematics using reconstructed bony surfaces
NASA Astrophysics Data System (ADS)
Lalone, Emily A.; McDonald, Colin P.; Ferreira, Louis M.; Peters, Terry M.; King, Graham J. W.; Johnson, James A.
2010-02-01
An approach for direct visualization of continuous three-dimensional elbow kinematics using reconstructed surfaces has been developed. Simulation of valgus motion was achieved in five cadaveric specimens using an upper arm simulator. Direct visualization of the motion of the ulna and humerus at the ulnohumeral joint was obtained using a contact-based registration technique. Employing fiducial markers, the rendered humerus and ulna were positioned according to the simulated motion. The specific aim of this study was to investigate the effect of radial head arthroplasty on restoring elbow joint stability after radial head excision. The position of the ulna and humerus was visualized for the intact elbow and following radial head excision and replacement. Visualization of the registered humerus/ulna indicated an increase in valgus angulation of the ulna with respect to the humerus after radial head excision. This increase in valgus angulation was restored to that of an elbow with a native radial head following radial head arthroplasty. These findings were consistent with previous studies investigating elbow joint stability following radial head excision and arthroplasty. The current technique was able to visualize a change in ulnar position in a single degree of freedom. Using this approach, the coupled motion of the ulna in all six degrees of freedom can also be visualized.
Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L
2017-05-01
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Lobjois, Régis; Dagonneau, Virginie; Isableu, Brice
2016-11-01
Compared with driving or flight simulation, little is known about self-motion perception in riding simulation. The goal of this study was to examine whether or not continuous roll motion supports the sensation of leaning into bends in dynamic motorcycle simulation. To this end, riders were able to freely tune the visual scene and/or motorcycle simulator roll angle to find a pattern that matched their prior knowledge. Our results revealed idiosyncrasy in the combination of visual and proprioceptive information. Some subjects relied more on the visual dimension, but reported increased sickness symptoms with the visual roll angle. Others relied more on proprioceptive information, tuning the direction of the visual scenery to match three possible patterns. Our findings also showed that these two subgroups tuned the motorcycle simulator roll angle in a similar way. This suggests that sustained inertially specified roll motion has contributed to the sensation of leaning in spite of the occurrence of unexpected gravito-inertial stimulation during the tilt. Several hypotheses are discussed. Practitioner Summary: Self-motion perception in motorcycle simulation is a relatively new research area. We examined how participants combined visual and proprioceptive information. Findings revealed individual differences in the visual dimension. However, participants tuned the simulator roll angle similarly, supporting the hypothesis that sustained inertially specified roll motion contributes to a leaning sensation.
Fan, Zhao; Harris, John
2010-10-12
In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.
Effects of visual motion consistent or inconsistent with gravity on postural sway.
Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo
2017-07-01
Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to the percept of self-motion. PMID:26973588
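The multi-voxel pattern analyses (MVPA) used above classify stimulus conditions from distributed voxel responses. As a minimal stand-in for the linear classifiers typically used in fMRI decoding (not the study's actual pipeline), a nearest-centroid decoder can be sketched as:

```python
import math

def nearest_centroid_decode(train, test):
    """Toy MVPA decoder: label each test pattern with the class whose
    mean training pattern (centroid) is nearest in Euclidean distance.

    train: dict mapping condition label -> list of voxel-response patterns
    test:  list of (pattern, true_label) pairs
    Returns decoding accuracy in [0, 1].
    """
    # Per-class centroid: mean response of each voxel across training patterns
    centroids = {
        label: [sum(voxel) / len(patterns) for voxel in zip(*patterns)]
        for label, patterns in train.items()
    }

    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    correct = sum(
        1 for pattern, label in test
        if min(centroids, key=lambda c: dist(pattern, centroids[c])) == label
    )
    return correct / len(test)
```

Above-chance accuracy for "coherent vs. random" but chance-level accuracy for "random vs. static" would mirror the pattern of discriminations reported for PcM and CSv.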
NASA Technical Reports Server (NTRS)
Hosman, R. J. A. W.; Vandervaart, J. C.
1984-01-01
An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time-histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known ways. Experiments with either of the visual displays or cockpit motion and some combinations of these were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.
High-level, but not low-level, motion perception is impaired in patients with schizophrenia.
Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia
2013-01-01
Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the higher-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task in patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are thought to be intact in schizophrenia. Patients with schizophrenia showed significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment on the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data restrict the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.
ERIC Educational Resources Information Center
Monaghan, James M.; Clement, John
1999-01-01
Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…
Sunglasses with thick temples and frame constrict temporal visual field.
Denion, Eric; Dugué, Audrey Emmanuelle; Augy, Sylvain; Coffin-Pichonnet, Sophie; Mouriaux, Frédéric
2013-12-01
Our aim was to compare the impact of two types of sunglasses on visual field and glare: one ("thick sunglasses") with a thick plastic frame and wide temples and one ("thin sunglasses") with a thin metal frame and thin temples. Using the Goldmann perimeter, visual field surface areas (cm²) were calculated as projections on a 30-cm virtual cupola. A V4 test object was used, from seen to unseen, in 15 healthy volunteers in the primary position of gaze ("base visual field"), then allowing eye motion ("eye motion visual field") without glasses, then with "thin sunglasses," followed by "thick sunglasses." Visual field surface area differences greater than the 14% reproducibility error of the method and having a p < 0.05 were considered significant. A glare test was done using a surgical lighting system pointed at the eye(s) at different incidence angles. No significant "base visual field" or "eye motion visual field" surface area variations were noted when comparing tests done without glasses and with the "thin sunglasses." In contrast, a 22% "eye motion visual field" surface area decrease (p < 0.001) was noted when comparing tests done without glasses and with "thick sunglasses." This decrease was most severe in the temporal quadrant (-33%; p < 0.001). All subjects reported less lateral glare with the "thick sunglasses" than with the "thin sunglasses" (p < 0.001). The better protection from lateral glare offered by "thick sunglasses" is offset by the much poorer ability to use lateral space exploration; this results in a loss of most, if not all, of the additional visual field gained through eye motion.
The search for instantaneous vection: An oscillating visual prime reduces vection onset latency.
Palmisano, Stephen; Riecke, Bernhard E
2018-01-01
Typically, it takes up to 10 seconds or more to induce a visual illusion of self-motion ("vection"). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8-9 s in the non-oscillating control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer-generated simulation experiences and training in the future, at no additional cost. PMID:29791445
Applications of Phase-Based Motion Processing
NASA Technical Reports Server (NTRS)
Branch, Nicholas A.; Stewart, Eric C.
2018-01-01
Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
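The core principle behind such phase-based motion analysis can be sketched in a few lines of numpy (an illustrative reconstruction, not the paper's software; the grating frequency, frame size, and sub-pixel shift are arbitrary assumptions): a small translation of a signal appears as a phase change of its Fourier coefficients, so motions far below one pixel can be read out from phase differences.

```python
import numpy as np

# A sub-pixel shift of a signal shows up as a phase change in its
# Fourier coefficients; the phase difference recovers the displacement.
N = 256
k = 4                                   # spatial frequency bin of the grating
shift = 0.05                            # sub-pixel motion, in samples
x = np.arange(N)
frame0 = np.sin(2 * np.pi * k * x / N)
frame1 = np.sin(2 * np.pi * k * (x - shift) / N)

F0 = np.fft.rfft(frame0)
F1 = np.fft.rfft(frame1)
dphi = np.angle(F1[k]) - np.angle(F0[k])
est = -dphi * N / (2 * np.pi * k)       # phase change -> displacement
print(est)                              # recovers the 0.05-sample shift
```

On real footage the frames must first be band-pass filtered (the role of the steerable pyramid) so that each phase belongs to a localized scale and orientation, which is where the computational cost mentioned in the abstract arises.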
Postural and Spatial Orientation Driven by Virtual Reality
Keshner, Emily A.; Kenyon, Robert V.
2009-01-01
Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796
Ageing vision and falls: a review.
Saftari, Liana Nafisa; Kwon, Oh-Sang
2018-04-23
Falls are the leading cause of accidental injury and death among older adults. One of three adults over the age of 65 years falls annually. As the size of elderly population increases, falls become a major concern for public health and there is a pressing need to understand the causes of falls thoroughly. While it is well documented that visual functions such as visual acuity, contrast sensitivity, and stereo acuity are correlated with fall risks, little attention has been paid to the relationship between falls and the ability of the visual system to perceive motion in the environment. The omission of visual motion perception in the literature is a critical gap because it is an essential function in maintaining balance. In the present article, we first review existing studies regarding visual risk factors for falls and the effect of ageing vision on falls. We then present a group of phenomena such as vection and sensory reweighting that provide information on how visual motion signals are used to maintain balance. We suggest that the current list of visual risk factors for falls should be elaborated by taking into account the relationship between visual motion perception and balance control.
New insights into the role of motion and form vision in neurodevelopmental disorders.
Johnston, Richard; Pitchford, Nicola J; Roach, Neil W; Ledgeway, Timothy
2017-12-01
A selective deficit in processing the global (overall) motion, but not form, of spatially extensive objects in the visual scene is frequently associated with several neurodevelopmental disorders, including preterm birth. Existing theories that have been proposed to explain the origin of this visual impairment are, however, challenged by recent research. In this review, we explore alternative hypotheses for why deficits in the processing of global motion, relative to global form, might arise. We describe recent evidence that has utilised novel tasks of global motion and global form to elucidate the underlying nature of the visual deficit reported in different neurodevelopmental disorders. We also examine the role of IQ and how the sex of an individual can influence performance on these tasks, as these are factors that are associated with performance on global motion tasks, but have not been systematically controlled for in previous studies exploring visual processing in clinical populations. Finally, we suggest that a new theoretical framework is needed for visual processing in neurodevelopmental disorders and present recommendations for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Adamchic, Ilya; Toth, Timea; Hauptmann, Christian; Walger, Martin; Langguth, Berthold; Klingmann, Ingrid; Tass, Peter Alexander
2017-01-01
Chronic subjective tinnitus is an auditory phantom phenomenon characterized by abnormal neuronal synchrony in the central auditory system. As shown computationally, acoustic coordinated reset (CR) neuromodulation causes a long-lasting desynchronization of pathological synchrony by downregulating abnormal synaptic connectivity. In a previous proof-of-concept study, acoustic CR neuromodulation, employing stimulation tone patterns tailored to the dominant tinnitus frequency, was compared to noisy CR-like stimulation, a CR version significantly detuned by sparing the tinnitus-related pitch range and including substantial random variability of the tone spacing on the frequency axis. Both stimulation protocols caused an acute relief as measured with visual analogue scale scores for tinnitus loudness (VAS-L) and annoyance (VAS-A) in the stimulation-ON condition (i.e. 15 min after stimulation onset), but only acoustic CR neuromodulation had sustained long-lasting therapeutic effects after 12 weeks of treatment as assessed with VAS-L, VAS-A scores and a tinnitus questionnaire (TQ) in the stimulation-OFF condition (i.e. with patients being off stimulation for at least 2.5 h). To understand the source of the long-lasting therapeutic effects, we here study whether acoustic CR neuromodulation has different electrophysiological effects on oscillatory brain activity as compared to noisy CR-like stimulation under stimulation-ON conditions and immediately after cessation of stimulation. To this end, we used a single-blind, single-application, crossover design in 18 patients with chronic tonal subjective tinnitus and administered three different 16-minute stimulation protocols: acoustic CR neuromodulation, noisy CR-like stimulation and low frequency range (LFR) stimulation, a CR-type stimulation with deliberately detuned pitch and repetition rate of stimulation tones, as control stimulation.
We measured VAS-L and VAS-A scores together with spontaneous EEG activity pre-, during- and post-stimulation. Under stimulation-ON conditions acoustic CR neuromodulation and noisy CR-like stimulation had similar effects: a reduction of VAS-L and VAS-A scores together with a decrease of auditory delta power and an increase of auditory alpha and gamma power, without significant differences. In contrast, LFR stimulation had significantly weaker EEG effects and no significant clinical effects under stimulation-ON conditions. The distinguishing feature between acoustic CR neuromodulation and noisy CR-like stimulation were the electrophysiological after-effects. Acoustic CR neuromodulation caused the longest significant reduction of delta and gamma and increase of alpha power in the auditory cortex region. Noisy CR-like stimulation had weaker and LFR stimulation hardly any electrophysiological after-effects. This qualitative difference further supports the assertion that long-term effects of acoustic CR neuromodulation on tinnitus are mediated by a specific disruption of synchronous neural activity. Furthermore, our results indicate that acute electrophysiological after-effects might serve as a marker to further improve desynchronizing sound stimulation.
Helicopter flight simulation motion platform requirements
NASA Astrophysics Data System (ADS)
Schroeder, Jeffery Allyn
Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
Hayashi, Takuji; Yokoi, Atsushi; Hirashima, Masaya; Nozaki, Daichi
2016-01-01
When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation. PMID:27275006
The notion of the motion: the neurocognition of motion lines in visual narratives.
Cohn, Neil; Maher, Stephen
2015-03-19
Motion lines appear ubiquitously in graphic representation to depict the path of a moving object, most popularly in comics. Some researchers have argued that these graphic signs directly tie to the "streaks" appearing in the visual system when a viewer tracks an object (Burr, 2000), despite the fact that previous studies have been limited to offline measurements. Here, we directly examine the cognition of motion lines by comparing images in comic strips that depicted normal motion lines with those that either had no lines or anomalous, reversed lines. In Experiment 1, shorter viewing times appeared to images with normal lines than those with no lines, which were shorter than those with anomalous lines. In Experiment 2, measurements of event-related potentials (ERPs) showed that, compared to normal lines, panels with no lines elicited a posterior positivity that was distinct from the frontal positivity evoked by anomalous lines. These results suggested that motion lines aid in the comprehension of depicted events. LORETA source localization implicated greater activation of visual and language areas when understanding was made more difficult by anomalous lines. Furthermore, in both experiments, participants' experience reading comics modulated these effects, suggesting motion lines are not tied to aspects of the visual system, but rather are conventionalized parts of the "vocabulary" of the visual language of comics. Copyright © 2015 Elsevier B.V. All rights reserved. PMID:25601006
Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers
2013-09-01
(Abstract garbled in extraction; the recoverable fragments describe a filter that detects visual-nystagmus events in horizontal gaze-position traces, experimental conditions chosen to simulate testing cognitively impaired observers, and a new drifting equiluminant-grating stimulus developed to test visual motion processing in the presence of incoherent motion noise.)
Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.
2016-01-01
During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908
Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.
2014-01-01
Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.
Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension
Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.
2016-01-01
The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects, when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974
The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices
An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei
2014-01-01
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using the vector-maximum but not the vector-summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
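The idea that a fast motion trajectory becomes an orientation in space-time, which underlies the spatio-temporal energy account above, can be illustrated numerically (a sketch with arbitrary grating parameters, not the authors' simulation): the 2-D Fourier spectrum of an x-t image of a drifting pattern peaks at a spatio-temporal frequency pair whose ratio encodes the speed.

```python
import numpy as np

# Build an x-t image of a grating drifting at 3 pixels/frame; its 2-D
# spectrum peaks at (fx, ft) with speed = -ft / fx.
nx, nt = 64, 64
speed = 3.0                 # pixels per frame
cycles = 4                  # spatial frequency, cycles per image
x = np.arange(nx)
t = np.arange(nt)[:, None]
xt = np.sin(2 * np.pi * cycles * (x - speed * t) / nx)

spec = np.abs(np.fft.fft2(xt))
spec[0, 0] = 0.0                                 # ignore the DC term
kt, kx = np.unravel_index(np.argmax(spec), spec.shape)
ft = np.fft.fftfreq(nt)[kt]                      # cycles per frame
fx = np.fft.fftfreq(nx)[kx]                      # cycles per pixel
est_speed = -ft / fx
print(est_speed)                                 # ~3.0
```

A bank of oriented spatio-temporal filters sampling this spectrum is exactly what a motion-energy model implements; at high speeds the dominant orientation in x-t is what the abstract describes as the motion-axis signal.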
Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference-of-Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
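The spatial term can be illustrated with a generic gradient-similarity index (a hedged sketch; the paper's exact filters and constants are not reproduced here, and `C` is an assumed SSIM-style stabilizing constant):

```python
import numpy as np

def gradient_magnitude(img):
    # per-pixel gradient magnitude via central differences
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def gradient_similarity(ref, dist, C=1e-3):
    # SSIM-style ratio: 1.0 when gradient structure is identical
    g1 = gradient_magnitude(ref)
    g2 = gradient_magnitude(dist)
    sim = (2.0 * g1 * g2 + C) / (g1 ** 2 + g2 ** 2 + C)
    return sim.mean()

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
s_same = gradient_similarity(frame, frame)
s_dist = gradient_similarity(frame, frame + 0.1 * rng.random((64, 64)))
print(s_same, s_dist)    # identical frames score 1.0; distortion lowers it
```

Pooling such per-frame spatial scores with the temporal (motion-energy) scores is what feeds the regression stage described in the abstract.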
Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.
Ibbotson, M R
2017-01-23
The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multisensory Self-Motion Compensation During Object Trajectory Judgments
Dokka, Kalpana; MacNeilage, Paul R.; DeAngelis, Gregory C.; Angelaki, Dora E.
2015-01-01
Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This fundamental ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual–vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual–vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals. PMID:24062317
NASA Astrophysics Data System (ADS)
Sousa, Teresa; Amaral, Carlos; Andrade, João; Pires, Gabriel; Nunes, Urbano J.; Castelo-Branco, Miguel
2017-08-01
Objective. The achievement of multiple instances of control with the same type of mental strategy represents a way to improve flexibility of brain-computer interface (BCI) systems. Here we test the hypothesis that pure visual motion imagery of an external actuator can be used as a tool to achieve three classes of electroencephalographic (EEG) based control, which might be useful in attention disorders. Approach. We hypothesize that different numbers of imagined motion alternations lead to distinctive signals, as predicted by distinct motion patterns. Accordingly, a distinct number of alternating sensory/perceptual signals would lead to distinct neural responses as previously demonstrated using functional magnetic resonance imaging (fMRI). We anticipate that differential modulations should also be observed in the EEG domain. EEG recordings were obtained from twelve participants using three imagery tasks: imagery of a static dot, imagery of a dot with two opposing motions in the vertical axis (two motion directions) and imagery of a dot with four opposing motions in vertical or horizontal axes (four directions). The data were analysed offline. Main results. An increase of alpha-band power was found in frontal and central channels as a result of visual motion imagery tasks when compared with static dot imagery, in contrast with the expected posterior alpha decreases found during simple visual stimulation. The successful classification and discrimination between the three imagery tasks confirmed that three different classes of control based on visual motion imagery can be achieved. The classification approach was based on a support vector machine (SVM) and on the alpha-band relative spectral power of a small group of six frontal and central channels. Patterns of alpha activity, as captured by single-trial SVM closely reflected imagery properties, in particular the number of imagined motion alternations. Significance. 
We found a new mental task based on visual motion imagery with potential for the implementation of multiclass (3) BCIs. Our results are consistent with the notion that frontal alpha synchronization is related with high internal processing demands, changing with the number of alternation levels during imagery. Together, these findings suggest the feasibility of pure visual motion imagery tasks as a strategy to achieve multiclass control systems with potential for BCI and in particular, neurofeedback applications in non-motor (attentional) disorders.
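The classification pipeline described above is compact enough to sketch. The toy example below is NumPy-only; the sampling rate, channel count, trial length, and the synthetic 10 Hz amplitudes are all invented, and a nearest-centroid classifier stands in for the paper's SVM. It computes alpha-band relative spectral power on six channels and separates three simulated imagery classes on those features:

```python
import numpy as np

FS = 250  # Hz, assumed EEG sampling rate (not from the paper)

def alpha_relative_power(trial, fs=FS, band=(8.0, 12.0)):
    """Relative alpha-band spectral power per channel.

    trial: (n_channels, n_samples) EEG segment.
    """
    freqs = np.fft.rfftfreq(trial.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trial, axis=-1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, in_band].sum(axis=-1) / psd.sum(axis=-1)

# Synthetic trials: three "imagery classes" that differ only in the
# amplitude of a 10 Hz component on six channels.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS

def make_trial(alpha_amp):
    noise = rng.normal(0.0, 1.0, (6, t.size))
    return noise + alpha_amp * np.sin(2 * np.pi * 10.0 * t)

X, y = [], []
for label, amp in enumerate([0.5, 1.5, 3.0]):  # static / 2-motion / 4-motion
    for _ in range(20):
        X.append(alpha_relative_power(make_trial(amp)))
        y.append(label)
X, y = np.array(X), np.array(y)

# Nearest-centroid classifier as a simple stand-in for the paper's SVM.
centroids = np.array([X[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = (pred == y).mean()
```

The point is only that a small alpha-power feature vector over few channels can carry a three-way class distinction; the real study's trained SVM, channel selection, and cross-validation are more involved.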
Visualization of Kepler’s laws of planetary motion
NASA Astrophysics Data System (ADS)
Lu, Meishu; Su, Jun; Wang, Weiguo; Lu, Jianlong
2017-03-01
For this article, we use a 3D printer to print a surface that mimics the potential well of universal gravitation, allowing Kepler’s laws of planetary motion to be demonstrated and investigated through the motion of a small ball on the surface. This novel experimental method allows Kepler’s laws of planetary motion to be visualized and will contribute to improving the manipulative ability of middle school students and the accessibility of classroom education.
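Kepler's second law also lends itself to a quick numerical check. The minimal sketch below (a point mass with GM = 1, arbitrary initial conditions, leapfrog integration; none of it is from the article) integrates an elliptical orbit and verifies that the areal velocity swept out by the radius vector stays constant:

```python
import numpy as np

GM = 1.0
r = np.array([1.0, 0.0])
v = np.array([0.0, 0.8])   # below circular speed, so the orbit is an ellipse
dt = 1e-3

def accel(r):
    # Inverse-square gravitational acceleration toward the origin.
    return -GM * r / np.linalg.norm(r) ** 3

# Leapfrog (kick-drift-kick) integration; record the areal velocity
# 0.5 * |r x v| after every step. Kepler's second law says it is constant.
areal = []
for _ in range(20000):
    v = v + 0.5 * dt * accel(r)
    r = r + dt * v
    v = v + 0.5 * dt * accel(r)
    areal.append(0.5 * abs(r[0] * v[1] - r[1] * v[0]))

areal = np.array(areal)
spread = areal.max() - areal.min()   # ~0: equal areas in equal times
```

The same integration could be extended to check the third law by timing orbits with different semi-major axes.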
Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion
Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo
2017-01-01
In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
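The jitter/latency trade-off described in the abstract can be illustrated with a one-dimensional toy fusion loop. Everything below is an invented stand-in (the rates, noise levels, gyro bias, and the two-level gain schedule), not the paper's filter: a biased gyro is integrated at a high rate for low-latency pose, and sparse "visual" poses correct the drift with a gain that is lowered during fast motion so the correction injects less jitter:

```python
import numpy as np

rng = np.random.default_rng(1)
fs_imu, fs_vis = 200, 20          # assumed inertial and visual rates (Hz)
dt = 1.0 / fs_imu
t = np.arange(0.0, 5.0, dt)
true = np.sin(0.8 * np.pi * t)    # "true" orientation, rad

# Gyro rate: true derivative plus a constant bias and noise (so pure
# integration drifts); visual pose: accurate but only available at 20 Hz.
gyro = np.gradient(true, dt) + 0.05 + rng.normal(0.0, 0.02, t.size)
vis = true + rng.normal(0.0, 0.01, t.size)

est, fused = 0.0, np.empty_like(t)
for i in range(t.size):
    est += gyro[i] * dt                        # low-latency inertial predict
    if i % (fs_imu // fs_vis) == 0:            # visual correction available
        k = 0.05 if abs(gyro[i]) > 1.0 else 0.2   # adaptive gain: trust the
        est += k * (vis[i] - est)                 # camera less in fast motion
    fused[i] = est

rmse_fused = np.sqrt(np.mean((fused - true) ** 2))
rmse_inertial_only = np.sqrt(np.mean((np.cumsum(gyro) * dt - true) ** 2))
```

The fused estimate tracks the true pose far better than open-loop gyro integration, while the gain schedule caricatures the paper's idea of adapting the filter to the current motion situation.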
Use of cues in virtual reality depends on visual feedback.
Fulvio, Jacqueline M; Rokers, Bas
2017-11-22
3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.
Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly
2018-01-01
Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Effects of face feature and contour crowding in facial expression adaptation.
Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong
2014-12-01
Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we did find a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 showed that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the separate effects of the face contour and the facial features, suggesting a nonlinear integration between facial features and face contour in face adaptation.
Is it just motion that silences awareness of other visual changes?
Peirce, Jonathan W
2013-06-28
When an array of visual elements is changing color, size, or shape incoherently, the changes are typically quite visible even when the overall color, size, or shape statistics of the field may not have changed. When the dots also move, however, the changes become much less apparent; awareness of them is "silenced" (Suchow & Alvarez, 2011). This finding might indicate that the perception of motion is of particular importance to the visual system, such that it is given priority in processing over other forms of visual change. Here we test whether that is the case by examining the converse: whether awareness of motion signals can be silenced by potent coherent changes in color or size. We find that they can, and with very similar effects, indicating that motion is not critical for silencing. Suchow and Alvarez's dots always moved in the same direction with the same speed, causing them to be grouped as a single entity. We also tested whether this coherence was a necessary component of the silencing effect. It is not; when the dot speeds are randomly selected, such that no coherent motion is present, the silencing effect remains. It is clear that neither motion nor grouping is directly responsible for the silencing effect. Silencing can be generated from any potent visual change.
Motion processing with two eyes in three dimensions.
Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C
2011-02-11
The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids, and a novel "dichoptic pseudoplaid" stimulus provides strong support for the use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.
Relationship Between Optimal Gain and Coherence Zone in Flight Simulation
NASA Technical Reports Server (NTRS)
Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.
2011-01-01
In motion simulation the inertial information generated by the motion platform is most of the time different from the visual information in the simulator displays. This occurs due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. It has been shown in previous studies that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. This result seems related to the coherence zones that have been measured in flight simulation studies. However, the optimal gain results were never directly related to the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements. We also tried to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center which used both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain differ from those obtained with the coherence zone measurements. The optimal gain is within the coherence zone. The point of mean optimal gain was lower and further away from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements was dependent on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration on either the coherence zone or the optimal gain measurements.
Zhang, Yi; Chen, Lihan
2016-01-01
Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading visual Ternus frame and by one lagging visual Ternus frame (VAAV) or dominantly inserted by two Ternus visual frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910
NASA Technical Reports Server (NTRS)
Bigler, W. B., II
1977-01-01
The NASA passenger ride quality apparatus (PRQA), a ground based motion simulator, was compared to the total in flight simulator (TIFS). Tests were made on PRQA with varying stimuli: motions only; motions and noise; motions, noise, and visual; and motions and visual. Regression equations for the tests were obtained and subsequent t-testing of the slopes indicated that ground based simulator tests produced comfort change rates similar to actual flight data. It was recommended that PRQA be used in the ride quality program for aircraft and that it be validated for other transportation modes.
Parallax visualization of full motion video using the Pursuer GUI
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Forgues, Mark B.
2014-06-01
In 2013, the authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI). In addition to the ability to PV WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
Flies and humans share a motion estimation strategy that exploits natural scene statistics
Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.
2014-01-01
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
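A defining property of triple correlations is that they are odd under contrast inversion, while pairwise (Hassenstein-Reichardt-style) correlators are blind to it. The sketch below uses illustrative correlator offsets, not the paper's fitted model: it builds a rightward-drifting edge movie, inverts its contrast to turn light edges into dark ones, and compares a 2-point and a 3-point correlator on both:

```python
import numpy as np

def make_edge_movie(n_x=64, n_t=64):
    """Rightward-moving step edge, returned as mean-subtracted contrast."""
    movie = np.zeros((n_t, n_x))
    for t in range(n_t):
        movie[t, : (t % n_x)] = 1.0
    return movie - movie.mean()

def c2(m):
    # 2-point spatiotemporal correlator (one direction-selective arm):
    # correlate each pixel with its right neighbour one frame later.
    return np.mean(m[:-1, :-1] * m[1:, 1:])

def c3(m):
    # 3-point correlator: two points at one time, one at the next.
    # Any odd-order correlator flips sign under contrast inversion.
    return np.mean(m[:-1, :-1] * m[:-1, 1:] * m[1:, 1:])

light = make_edge_movie()
dark = -light                      # contrast-inverted: dark edges instead
c2_light, c2_dark = c2(light), c2(dark)
c3_light, c3_dark = c3(light), c3(dark)
```

Because the pairwise correlator multiplies an even number of contrast values, `c2` is identical for the light- and dark-edge movies; `c3` changes sign, which is the polarity sensitivity the abstract attributes to light- and dark-edge channels.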
Impaired visual recognition of biological motion in schizophrenia.
Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee
2005-09-15
Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.
Application of virtual reality graphics in assessment of concussion.
Slobounov, Semyon; Slobounov, Elena; Newell, Karl
2006-04-01
Abnormal balance in individuals suffering from traumatic brain injury (TBI) has been documented in numerous recent studies. However, the specific mechanisms causing balance deficits have not been systematically examined. This paper demonstrates the destabilizing effect of visual field motion induced by virtual reality graphics in concussed individuals, but not in normal controls. Fifty-five student-athletes at risk for concussion participated in this study prior to injury, and 10 of these subjects who suffered MTBI were tested again on day 3, day 10, and day 30 after the incident. Postural responses to visual field motion were recorded using a virtual reality (VR) environment in conjunction with balance (AMTI force plate) and motion tracking (Flock of Birds) technologies. Two experimental conditions were introduced in which subjects passively viewed VR scenes or actively manipulated the visual field motion. Long-lasting destabilizing effects of visual field motion were revealed, although subjects were asymptomatic when standard balance tests were introduced. The findings demonstrate that advanced VR technology may detect residual symptoms of concussion at least 30 days post-injury.
Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.
Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp
2012-07-30
Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia. Copyright © 2012. Published by Elsevier Ireland Ltd.
Decreased susceptibility to motion sickness during exposure to visual inversion in microgravity
NASA Technical Reports Server (NTRS)
Lackner, James R.; Dizio, Paul
1991-01-01
Head and body movements made in microgravity tend to bring on symptoms of motion sickness. Such head movements, relative to comparable ones made on earth, are accompanied by unusual combinations of semicircular canal and otolith activity owing to the unloading of the otoliths in 0G. Head movements also bring on symptoms of motion sickness during exposure to visual inversion (or reversal) on earth because the vestibulo-ocular reflex is rendered anti-compensatory. Here, evidence is presented that susceptibility to motion sickness during exposure to visual inversion is decreased in a 0G relative to 1G force background. This difference in susceptibility appears related to the alteration in otolith function in 0G. Some implications of this finding for the etiology of space motion sickness are described.
Sugita, Norihiro; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Watanabe, Takashi; Chiba, Shigeru; Yambe, Tomoyuki; Nitta, Shin-ichi
2007-09-28
Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, a virtual environment that includes unstable visual images presented on a wide-field screen or a head-mounted display tends to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of the motion sickness they suffered using a subjective score and the physiological index rho(max), which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. The results showed adaptation to visually induced motion sickness with repeated presentation of the same image in both the subjective and objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the part of the video image related to motion sickness by analyzing changes in rho(max) over time. The physiological index rho(max) may thus be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems that use new image technologies.
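The rho(max) index as defined in the abstract is straightforward to compute. In the sketch below the heart-rate and pulse-wave-transmission-time series are synthetic stand-ins (a shared slow oscillation with an assumed 3-sample lag and sign inversion), and the lag search range is an arbitrary choice:

```python
import numpy as np

def rho_max(hr, pwtt, max_lag=10):
    """Maximum absolute Pearson correlation between hr and lagged pwtt."""
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], pwtt[: len(pwtt) - lag]
        else:
            a, b = hr[:lag], pwtt[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, abs(r))
    return best

rng = np.random.default_rng(2)
t = np.arange(300)
# Coupled pair: both driven by the same slow oscillation, PWTT delayed
# by 3 samples and inverted (made-up coupling, for illustration only).
hr = 60 + 5 * np.sin(2 * np.pi * t / 30) + rng.normal(0.0, 0.5, t.size)
pwtt = 200 - 3 * np.sin(2 * np.pi * (t - 3) / 30) + rng.normal(0.0, 0.5, t.size)

coupled = rho_max(hr, pwtt)
uncoupled = rho_max(hr, rng.normal(200.0, 1.0, t.size))
```

A strongly autonomically coupled pair yields rho(max) near 1, while an uncoupled pair stays low; tracking this value over time is what lets the study localize sickness-related segments of the video.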
Visual-Vestibular Conflict Detection Depends on Fixation.
Garzorz, Isabelle T; MacNeilage, Paul R
2017-09-25
Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration. Copyright © 2017 Elsevier Ltd. All rights reserved.
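The benchmark logic in this abstract can be made concrete. Under the usual Gaussian assumptions, comparing two noisy signals (conflict detection) sums their variances, while optimally integrating them sums their precisions; the sigmas below are made-up stand-ins for measured visual and vestibular noise levels:

```python
import numpy as np

sigma_vis, sigma_vest = 2.0, 3.0   # assumed single-cue noise (e.g. deg/s)

# Conflict detection = discriminating the two signals: variances add.
sigma_conflict = np.hypot(sigma_vis, sigma_vest)

# Optimal integration = reliability-weighted average: precisions add.
sigma_integrated = np.sqrt(sigma_vis**2 * sigma_vest**2
                           / (sigma_vis**2 + sigma_vest**2))

# Monte Carlo check of both predictions.
rng = np.random.default_rng(5)
vis = rng.normal(0.0, sigma_vis, 100_000)
vest = rng.normal(0.0, sigma_vest, 100_000)
w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)  # reliability weight
empirical_conflict = np.std(vis - vest)
empirical_integrated = np.std(w_vis * vis + (1 - w_vis) * vest)
```

The integrated estimate is more precise than either cue alone, while the conflict signal is noisier than either: this asymmetry is the benchmark against which the paper assesses measured conflict-detection performance.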
Visual Occlusion Decreases Motion Sickness in a Flight Simulator.
Ishak, Shaziela; Bubka, Andrea; Bonato, Frederick
2018-05-01
Sensory conflict theories of motion sickness (MS) assert that symptoms may result when incoming sensory inputs (e.g., visual and vestibular) contradict each other. Logic suggests that attenuating input from one sense may reduce conflict and hence lessen MS symptoms. In the current study, it was hypothesized that attenuating visual input by blocking light entering the eye would reduce MS symptoms in a motion-provocative environment. Participants sat inside an aircraft cockpit mounted onto a motion platform that simultaneously pitched, rolled, and heaved in two conditions. In the occluded condition, participants wore "blackout" goggles and closed their eyes to block light. In the control condition, participants opened their eyes and had full view of the cockpit's interior. Participants completed separate Simulator Sickness Questionnaires before and after each condition. The posttreatment total Simulator Sickness Questionnaire scores and the subscores for nausea, oculomotor symptoms, and disorientation were significantly higher in the control condition than in the occluded condition. These results suggest that under some conditions attenuating visual input may delay the onset of MS or weaken the severity of symptoms. Eliminating visual input may reduce visual/nonvisual sensory conflict by weakening the influence of the visual channel, which is consistent with the sensory conflict theory of MS.
Visual Target Tracking in the Presence of Unknown Observer Motion
NASA Technical Reports Server (NTRS)
Williams, Stephen; Lu, Thomas
2009-01-01
Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
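The two-stage idea, global registration followed by filtering, can be sketched with phase correlation and a scalar Kalman filter. The frames, noise levels, and filter constants below are invented for the demo; the paper's actual registration and tracking pipeline is more elaborate:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer (dy, dx) such that b ≈ np.roll(a, (dy, dx), axis=(0, 1))."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)     # cross-power spectrum
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    wrap = lambda d, n: d - n if d > n // 2 else d   # map to signed shifts
    return wrap(dy, a.shape[0]), wrap(dx, a.shape[1])

rng = np.random.default_rng(3)
frame_a = rng.normal(size=(64, 64))                 # stand-in image texture
frame_b = np.roll(frame_a, (3, -5), axis=(0, 1))    # "camera" moved (3, -5)
dy, dx = phase_correlation_shift(frame_a, frame_b)

# Smooth noisy per-frame shift measurements with a scalar Kalman filter so
# residual, unexplained motion can be attributed to targets.
def kalman_1d(zs, q=0.01, r=1.0):
    x, p, out = zs[0], 1.0, []
    for z in zs:
        p += q                 # predict: process noise grows uncertainty
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # correct with the new measurement
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

measured = 2.0 + rng.normal(0.0, 1.0, 50)   # noisy shifts, true = 2 px/frame
smoothed = kalman_1d(measured)
```

With the observer motion estimated and smoothed this way, subtracting it from the image stream leaves target motion as the residual, which is the premise of the second tracking phase.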
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
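Template-style pooling of optic flow can be caricatured in a few lines. The sketch below is a cartoon of the mechanism, not the model in the paper: each candidate "cell" prefers radial expansion about its own focus of expansion (FOE), pools the local match with a Gaussian receptive field, and the most active cell's FOE is read out as the heading. The grid, sigma, and noise-free flow are all toy choices:

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(xs, xs)
true_foe = (xs[13], xs[8])        # heading (FOE) at grid point ~(0.3, -0.2)

# Noise-free optic flow for pure forward translation: radial expansion.
U, V = X - true_foe[0], Y - true_foe[1]

def cell_response(foe, sigma=0.8):
    """Gaussian-pooled match between the flow and this cell's template."""
    tu, tv = X - foe[0], Y - foe[1]                  # preferred directions
    w = np.exp(-(tu**2 + tv**2) / (2 * sigma**2))    # receptive-field weights
    # Cosine of the angle between local flow and the template direction
    # (the small constant guards the singular point at the FOE itself).
    cos = (U * tu + V * tv) / (np.hypot(U, V) * np.hypot(tu, tv) + 1e-9)
    return np.sum(w * cos) / np.sum(w)

candidates = [(fx, fy) for fx in xs for fy in xs]
best = max(candidates, key=cell_response)
heading_error = float(np.hypot(best[0] - true_foe[0], best[1] - true_foe[1]))
```

The winning template coincides with the true FOE, illustrating how a population of pooled motion-sensitive cells can encode heading without any explicit geometric inverse; the paper's model adds the cortical magnification and extraretinal eye-movement terms this cartoon omits.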
Rhesus Monkeys Behave As If They Perceive the Duncker Illusion
Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.
2008-01-01
The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233
Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-01-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…
The psychophysics of Visual Motion and Global form Processing in Autism
ERIC Educational Resources Information Center
Koldewyn, Kami; Whitney, David; Rivera, Susan M.
2010-01-01
Several groups have recently reported that people with autism may suffer from a deficit in visual motion processing and proposed that these deficits may be related to a general dorsal stream dysfunction. In order to test the dorsal stream deficit hypothesis, we investigated coherent and biological motion perception as well as coherent form…
ERIC Educational Resources Information Center
Koh, Hwan Cui; Milne, Elizabeth; Dobkins, Karen
2010-01-01
The magnocellular (M) pathway hypothesis proposes that impaired visual motion perception observed in individuals with Autism Spectrum Disorders (ASD) might be mediated by atypical functioning of the subcortical M pathway, as this pathway provides the bulk of visual input to cortical motion detectors. To test this hypothesis, we measured luminance…
Integration of visual and motion cues for simulator requirements and ride quality investigation
NASA Technical Reports Server (NTRS)
Young, L. R.
1976-01-01
Practical tools are developed to extend the state of the art of moving-base flight simulation for research and training. The main approaches of this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues, yielding optimum simulator motion controls; and (2) visual cues in landing.
Differential responses in dorsal visual cortex to motion and disparity depth cues
Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.
2013-01-01
We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808
Visuomotor adaptability in older adults with mild cognitive decline.
Schaffert, Jeffrey; Lee, Chi-Mei; Neill, Rebecca; Bo, Jin
2017-02-01
The current study examined the augmentation of error feedback on visuomotor adaptability in older adults with varying degrees of cognitive decline (assessed by the Montreal Cognitive Assessment; MoCA). Twenty-three participants performed a center-out computerized visuomotor adaptation task when the visual feedback of their hand movement error was presented in a regular (ratio=1:1) or enhanced (ratio=1:2) error feedback schedule. Results showed that older adults with lower scores on the MoCA had less adaptability than those with higher MoCA scores during the regular feedback schedule. However, participants demonstrated similar adaptability during the enhanced feedback schedule, regardless of their cognitive ability. Furthermore, individuals with lower MoCA scores showed larger after-effects in spatial control during the enhanced schedule compared to the regular schedule, whereas individuals with higher MoCA scores displayed the opposite pattern. Additional neuro-cognitive assessments revealed that spatial working memory and processing speed were positively related to motor adaptability during the regular schedule but negatively related to adaptability during the enhanced schedule. We argue that individuals with mild cognitive decline employed different adaptation strategies when encountering enhanced visual feedback, suggesting older adults with mild cognitive impairment (MCI) may benefit from enhanced visual error feedback during sensorimotor adaptation.
Raudies, Florian; Neumann, Heiko
2012-01-01
The analysis of motion crowds is concerned with the detection of potential hazards for individuals of the crowd. Existing methods analyze the statistics of pixel motion to classify non-dangerous or dangerous behavior, to detect outlier motions, or to estimate the mean throughput of people for an image region. We suggest a biologically inspired model for the analysis of motion crowds that extracts motion features indicative for potential dangers in crowd behavior. Our model consists of stages for motion detection, integration, and pattern detection that model functions of the primate primary visual cortex area (V1), the middle temporal area (MT), and the medial superior temporal area (MST), respectively. This model allows for the processing of motion transparency, the appearance of multiple motions in the same visual region, in addition to processing opaque motion. We suggest that motion transparency helps to identify “danger zones” in motion crowds. For instance, motion transparency occurs in small exit passages during evacuation. However, motion transparency occurs also for non-dangerous crowd behavior when people move in opposite directions organized into separate lanes. Our analysis suggests: The combination of motion transparency and a slow motion speed can be used for labeling of candidate regions that contain dangerous behavior. In addition, locally detected decelerations or negative speed gradients of motions are a precursor of danger in crowd behavior as are globally detected motion patterns that show a contraction toward a single point. In sum, motion transparency, image speeds, motion patterns, and speed gradients extracted from visual motion in videos are important features to describe the behavioral state of a motion crowd. PMID:23300930
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
88-0551. Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364. References include: Sperling, HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347 (HIPS is the Human Information Processing Laboratory's Image Processing System); van Santen, Jan P. H., and George Sperling (1985), Elaborated Reichardt detectors. Journal of the Optical
Encodings of implied motion for animate and inanimate object categories in the two visual pathways.
Lu, Zhengang; Li, Xueting; Meng, Ming
2016-01-15
Previous research has proposed two separate pathways for visual processing: the dorsal pathway for "where" information vs. the ventral pathway for "what" information. Interestingly, the middle temporal cortex (MT) in the dorsal pathway is involved in representing implied motion from still pictures, suggesting an interaction between motion and object related processing. However, the relationship between how the brain encodes implied motion and how the brain encodes object/scene categories is unclear. To address this question, fMRI was used to measure activity along the two pathways corresponding to different animate and inanimate categories of still pictures with different levels of implied motion speed. In the visual areas of both pathways, activity induced by pictures of humans and animals was hardly modulated by the implied motion speed. By contrast, activity in these areas correlated with the implied motion speed for pictures of inanimate objects and scenes. The interaction between implied motion speed and stimuli category was significant, suggesting different encoding mechanisms of implied motion for animate-inanimate distinction. Further multivariate pattern analysis of activity in the dorsal pathway revealed significant effects of stimulus category that are comparable to the ventral pathway. Moreover, still pictures of inanimate objects/scenes with higher implied motion speed evoked activation patterns that were difficult to differentiate from those evoked by pictures of humans and animals, indicating a functional role of implied motion in the representation of object categories. These results provide novel evidence to support integrated encoding of motion and object categories, suggesting a rethink of the relationship between the two visual pathways.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Shift in speed selectivity of visual cortical neurons: A neural basis of perceived motion contrast
Li, Chao-Yi; Lei, Jing-Jiang; Yao, Hai-Shan
1999-01-01
The perceived speed of motion in one part of the visual field is influenced by the speed of motion in its surrounding fields. Little is known about the cellular mechanisms causing this phenomenon. Recordings from mammalian visual cortex revealed that speed preference of the cortical cells could be changed by displaying a contrast speed in the field surrounding the cell’s classical receptive field. The neuron’s selectivity shifted to prefer faster speed if the contextual surround motion was set at a relatively lower speed, and vice versa. These specific center–surround interactions may underlie the perceptual enhancement of speed contrast between adjacent fields. PMID:10097161
Interocular velocity difference contributes to stereomotion speed perception
NASA Technical Reports Server (NTRS)
Brooks, Kevin R.
2002-01-01
Two experiments are presented assessing the contributions of the rate of change of disparity (CD) and interocular velocity difference (IOVD) cues to stereomotion speed perception. Using a two-interval forced-choice paradigm, the perceived speed of directly approaching and receding stereomotion and of monocular lateral motion in random dot stereogram (RDS) targets was measured. Prior adaptation using dysjunctively moving random dot stimuli induced a velocity aftereffect (VAE). The degree of interocular correlation in the adapting images was manipulated to assess the effectiveness of each cue. While correlated adaptation involved a conventional RDS stimulus, containing both IOVD and CD cues, uncorrelated adaptation featured an independent dot array in each monocular half-image, and hence lacked a coherent disparity signal. Adaptation produced a larger VAE for stereomotion than for monocular lateral motion, implying effects at neural sites beyond that of binocular combination. For motion passing through the horopter, correlated and uncorrelated adaptation stimuli produced equivalent stereomotion VAEs. The possibility that these results were due to the adaptation of a CD mechanism through random matches in the uncorrelated stimulus was discounted in a control experiment. Here both simultaneous and sequential adaptation of left and right eyes produced similar stereomotion VAEs. Motion at uncrossed disparities was also affected by both correlated and uncorrelated adaptation stimuli, but showed a significantly greater VAE in response to the former. These results show that (1) there are two separate, specialised mechanisms for encoding stereomotion: one through IOVD, the other through CD; (2) the IOVD cue dominates the perception of stereomotion speed for stimuli passing through the horopter; and (3) at a disparity pedestal both the IOVD and the CD cues have a significant influence.
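The two stereomotion cues contrasted above can be written out directly. This is my formulation of the standard definitions, not the paper's stimuli: for a point moving in depth, the left- and right-eye image positions over time give a change-of-disparity (CD) signal and an interocular velocity difference (IOVD) signal, which are numerically identical when binocular matching succeeds, yet IOVD never requires forming a disparity signal.

```python
import numpy as np

# CD cue:   d/dt (xL - xR), the temporal derivative of disparity
# IOVD cue: vL - vR, the difference of monocular image velocities

def cd_cue(xL, xR, dt):
    """Rate of change of binocular disparity."""
    disparity = np.asarray(xL, float) - np.asarray(xR, float)
    return np.diff(disparity) / dt

def iovd_cue(xL, xR, dt):
    """Difference of left- and right-eye image velocities."""
    vL = np.diff(np.asarray(xL, float)) / dt
    vR = np.diff(np.asarray(xR, float)) / dt
    return vL - vR
```

The uncorrelated adaptors in the experiment exploit exactly this asymmetry: independent dot arrays in the two eyes still carry monocular velocities (so IOVD survives) but no coherent disparity signal (so CD is abolished).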
Guedry, F E; Benson, A J; Moore, H J
1982-06-01
Visual search within a head-fixed display consisting of a 12 X 12 digit matrix is degraded by whole-body angular oscillation at 0.02 Hz (+/- 155 degrees/s peak velocity), and signs and symptoms of motion sickness are prominent in a number of individuals within a 5-min exposure. Exposure to 2.5 Hz (+/- 20 degrees/s peak velocity) produces equivalent degradation of the visual search task, but does not produce signs and symptoms of motion sickness within a 5-min exposure.
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human-factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
Hu, Bin; Yue, Shigang; Zhang, Zhuhong
All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction-selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction-selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
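A much-simplified stand-in for the rotational cue the network detects: the sign of a flow field's summed curl distinguishes clockwise from counter-clockwise rotation. This sketch is mine, not the DSNN above (which performs the classification with a ring of direction-selective neurons and delayed multiplicative interactions); it only illustrates the signal being extracted.

```python
import numpy as np

def rotation_sign(points, flows):
    """Return +1 for counter-clockwise, -1 for clockwise rotation
    (image coordinates with y pointing up)."""
    p = np.asarray(points, float)
    v = np.asarray(flows, float)
    c = p - p.mean(axis=0)                           # positions about the centroid
    curl = c[:, 0] * v[:, 1] - c[:, 1] * v[:, 0]     # z-component of r x v
    return 1 if curl.sum() > 0 else -1
```

Because positions are taken relative to the centroid, the result is independent of where the rotation center sits in the image, loosely analogous to the cyclic arrangement of directional preferences in the model.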
Event processing in the visual world: Projected motion paths during spoken sentence comprehension.
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-05-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space.
NASA Technical Reports Server (NTRS)
Young, L. R.
1976-01-01
Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.
Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.
1986-01-01
A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.
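The critical sampling frequency mentioned above admits a back-of-envelope version. Sampling a motion at rate fs creates spectral replicas displaced by multiples of fs in temporal frequency; the display looks continuous when every replica falls outside the window of visibility. The numeric limits below are placeholders, not Watson et al.'s measured values; only the form of the rule follows the theory.

```python
# For image speed r (deg/s), spatial limit u_max (c/deg), and temporal limit
# w_max (Hz), the motion spectrum lies on the line w = -r*u, and the first
# replica clears the window when the sample rate reaches
#     fs_c = w_max + r * u_max.

def critical_sample_rate(speed, u_max=30.0, w_max=60.0):
    """Minimum frame rate (Hz) at which sampled motion appears continuous
    (u_max, w_max are assumed placeholder visibility limits)."""
    return w_max + speed * u_max

def looks_continuous(frame_rate, speed, u_max=30.0, w_max=60.0):
    return frame_rate >= critical_sample_rate(speed, u_max, w_max)
```

Note the linear dependence on speed: faster image motion demands a proportionally higher frame rate, which is why slowly panning content survives low sample rates that fast motion does not.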
Visual Depth from Motion Parallax and Eye Pursuit
Stroyan, Keith; Nawrot, Mark
2012-01-01
A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment, however retinal motion alone cannot mathematically determine relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p.1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit mathematically determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case when objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation to analyze the dynamic geometry of future experiments. PMID:21695531
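The motion/pursuit ratio described above can be sketched numerically. In the simplest viewing geometry (lateral translation at speed T, fixation at distance f, a point at distance f + d behind fixation), the retinal motion rate is m = T·d/(f·(f + d)) and the pursuit rate is p = T/f, so m/p = d/(f + d); the exact formulation and its extension beyond central vision are developed in the paper and in Nawrot & Stroyan (2009). The helpers below are a minimal sketch of that relation, not the authors' code.

```python
def motion_pursuit_ratio(retinal_motion_rate, pursuit_rate):
    """m/p, which equals d/(f + d) in the simple lateral-translation geometry."""
    if pursuit_rate == 0:
        raise ValueError("pursuit rate must be nonzero")
    return retinal_motion_rate / pursuit_rate

def depth_from_ratio(ratio, fixation_distance):
    """Invert m/p = d/(f + d) for the depth d beyond fixation (ratio < 1)."""
    return ratio * fixation_distance / (1.0 - ratio)
```

Notice that the translation speed T cancels in the ratio, which is the point of the cue: retinal motion alone is ambiguous, but dividing by the pursuit rate removes the unknown observer speed.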
Accounting for direction and speed of eye motion in planning visually guided manual tracking.
Leclercq, Guillaume; Blohm, Gunnar; Lefèvre, Philippe
2013-10-01
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
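The compensation quantified above reduces to vector addition: spatial target velocity = retinal velocity + eye velocity. A planner working in purely retinal coordinates uses the retinal vector alone; full compensation recovers the spatial vector. The sketch below (mine, not the authors' analysis code) expresses the paper's 75-102% compensation range as a gain interpolating between the two.

```python
import numpy as np

def planned_direction(v_retinal, v_eye, gain=1.0):
    """Direction (radians) of the planned arm movement for a given
    compensation gain: 0 = retinal coordinates, 1 = fully spatial."""
    v = np.asarray(v_retinal, float) + gain * np.asarray(v_eye, float)
    return np.arctan2(v[1], v[0])
```

With a rightward retinal target motion and an upward eye velocity, for example, a gain of 0 predicts a purely rightward initial arm direction, while a gain of 1 predicts the oblique spatial direction; the measured initial direction places the observer's gain between the two.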
Heading Tuning in Macaque Area V6.
Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E
2015-12-16
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. 
We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception.
Posture-based processing in visual short-term memory for actions.
Vicary, Staci A; Stevens, Catherine J
2014-01-01
Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
Visual and motion cueing in helicopter simulation
NASA Technical Reports Server (NTRS)
Bray, R. S.
1985-01-01
Early experience in fixed-cockpit simulators, with limited field of view, demonstrated the basic difficulties of simulating helicopter flight at the level of subjective fidelity required for confident evaluation of vehicle characteristics. More recent programs, utilizing large-amplitude cockpit motion and a multiwindow visual-simulation system, have received a much higher degree of pilot acceptance. However, none of these simulations has presented critical visual-flight tasks that the pilots have accepted as the full equivalent of flight. In this paper, the visual cues presented in the simulator are compared with those of flight in an attempt to identify deficiencies that contribute significantly to these assessments. For the low-amplitude maneuvering tasks normally associated with the hover mode, the unique motion capabilities of the Vertical Motion Simulator (VMS) at Ames Research Center permit nearly a full representation of vehicle motion. Especially appreciated in these tasks are the vertical-acceleration responses to collective control. For larger-amplitude maneuvering, motion fidelity must suffer diminution through direct attenuation, through high-pass filtering ('washout') of the computed cockpit accelerations, or both. Experiments were conducted in an attempt to determine the effects of these distortions on pilot performance of height-control tasks.
Adaptation of velocity encoding in synaptically coupled neurons in the fly visual system.
Kalb, Julia; Egelhaaf, Martin; Kurtz, Rafael
2008-09-10
Although many adaptation-induced effects on neuronal response properties have been described, it is often unknown at what processing stages in the nervous system they are generated. We focused on fly visual motion-sensitive neurons to identify changes in response characteristics during prolonged visual motion stimulation. By simultaneous recordings of synaptically coupled neurons, we were able to directly compare adaptation-induced effects at two consecutive processing stages in the fly visual motion pathway. This allowed us to narrow the potential sites of adaptation effects within the visual system and to relate them to the properties of signal transfer between neurons. Motion adaptation was accompanied by a response reduction, which was somewhat stronger in postsynaptic than in presynaptic cells. We found that the linear representation of motion velocity degrades during adaptation to a white-noise velocity-modulated stimulus. This effect is caused by an increasingly nonlinear velocity representation rather than by an increase of noise and is similarly strong in presynaptic and postsynaptic neurons. In accordance with this similarity, the dynamics and the reliability of interneuronal signal transfer remained nearly constant. Thus, adaptation is mainly based on processes located in the presynaptic neuron or in more peripheral processing stages. In contrast, changes of transfer properties at the analyzed synapse or in postsynaptic spike generation contribute little to changes in velocity coding during motion adaptation.
Decoding the direction of imagined visual motion using 7 T ultra-high field fMRI
Emmerling, Thomas C.; Zimmermann, Jan; Sorger, Bettina; Frost, Martin A.; Goebel, Rainer
2016-01-01
There is a long-standing debate about the neurocognitive implementation of mental imagery. One form of mental imagery is the imagery of visual motion, which is of interest due to its naturalistic and dynamic character. However, so far only the mere occurrence of motion imagery, rather than its specific content, has been shown to be detectable. In the current study, the application of multi-voxel pattern analysis to high-resolution functional data of 12 subjects acquired with ultra-high field 7 T functional magnetic resonance imaging allowed us to show that imagery of visual motion can indeed activate the earliest levels of the visual hierarchy, but the extent thereof varies considerably between subjects. Our approach enabled classification not only of complex imagery, but also of its actual contents, in that the direction of imagined motion out of four options was successfully identified in two thirds of the subjects and with accuracies of up to 91.3% in individual subjects. A searchlight analysis confirmed the local origin of decodable information in striate and extra-striate cortex. These high-accuracy findings not only shed new light on a central question in vision science on the constituents of mental imagery, but also show for the first time that the specific sub-categorical content of visual motion imagery is reliably decodable from brain imaging data on a single-subject level. PMID:26481673
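The pattern-classification logic behind this kind of decoding can be illustrated with a toy sketch. The synthetic "voxel" data, the signal strength, and the nearest-class-mean classifier below are all illustrative assumptions, standing in for the study's actual multi-voxel pattern analysis of fMRI data:

```python
# Toy sketch of multi-voxel pattern decoding of imagined motion direction.
# Synthetic "voxel" data, NOT real fMRI; a nearest-class-mean classifier
# stands in for the study's multi-voxel pattern analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels, n_dirs = 40, 200, 4   # trials per direction, voxels, directions

# Each imagined direction evokes a weak, fixed spatial pattern plus noise.
templates = rng.normal(size=(n_dirs, n_voxels))
X = np.vstack([rng.normal(size=(n_trials, n_voxels)) + 0.5 * templates[d]
               for d in range(n_dirs)])
y = np.repeat(np.arange(n_dirs), n_trials)

# Split trials into train/test halves; classify by nearest class-mean pattern.
train = np.arange(len(y)) % 2 == 0
means = np.stack([X[train & (y == d)].mean(axis=0) for d in range(n_dirs)])
dists = ((X[~train][:, None, :] - means[None]) ** 2).sum(axis=2)
accuracy = (dists.argmin(axis=1) == y[~train]).mean()   # chance level = 0.25
```

Even this crude classifier recovers the direction label well above the 25% chance level when a consistent direction-specific pattern is embedded in the voxel noise, which is the core premise of decoding sub-categorical imagery content.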
Dynamic and predictive links between touch and vision.
Gray, Rob; Tan, Hong Z
2002-07-01
We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.
Global motion perception is associated with motor function in 2-year-old children.
Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E
2017-09-29
The dorsal visual processing stream that includes V1, motion sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly, but statistically significantly associated with Bayley III composite motor (r² = 0.06, p < 0.001, n = 375) and gross motor scores (r² = 0.06, p < 0.001, n = 375). The associations remained significant when language score was included in the regression model. In addition, when language score was included in the model, stereopsis was significantly associated with composite motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
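A simple linear regression with r² = 0.06 means the predictor explains about 6% of the variance in the outcome. The sketch below shows how such a coefficient of determination is computed; the numbers are synthetic and purely illustrative, not the study's data:

```python
# Sketch of the simple linear regression behind a modest r^2 (~0.06): how
# much variance in a motor score a motion coherence threshold explains.
# All numbers below are synthetic and illustrative, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n = 375
threshold = rng.normal(50.0, 10.0, n)                        # coherence threshold (%)
motor = 100.0 - 0.25 * threshold + rng.normal(0.0, 10.0, n)  # hypothetical score

# Least-squares fit and coefficient of determination.
slope, intercept = np.polyfit(threshold, motor, 1)
predicted = slope * threshold + intercept
ss_res = ((motor - predicted) ** 2).sum()        # residual sum of squares
ss_tot = ((motor - motor.mean()) ** 2).sum()     # total sum of squares
r_squared = 1.0 - ss_res / ss_tot
```

With a weak true slope relative to the noise, r_squared comes out small but nonzero, which is the pattern reported: a modest yet statistically significant association.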
The broken escalator phenomenon. Aftereffect of walking onto a moving platform.
Reynolds, R F; Bronstein, A M
2003-08-01
We investigated the physiological basis of the 'broken escalator phenomenon', namely the sensation that when walking onto an escalator which is stationary one experiences an odd sensation of imbalance, despite full awareness that the escalator is not going to move. The experimental moving surface was provided by a linear motor-powered sled, moving at 1.2 m/s. Sled velocity, trunk position, trunk angular velocity, EMG of the ankle flexors-extensors and foot-contact signals were recorded in 14 normal subjects. The experiments involved, initially, walking onto the stationary sled (condition Before). Then, subjects walked 20 times onto the moving sled (condition Moving), and it was noted that they increased their walking velocity from a baseline of 0.60 m/s to 0.90 m/s. After the moving trials, subjects were unequivocally warned that the platform would no longer move and asked to walk onto the stationary sled again (condition After). It was found that, despite this warning, subjects walked onto the stationary platform inappropriately fast (0.71 m/s), experienced a large overshoot of the trunk and displayed increased leg electromyographic (EMG) activity. Subjects were surprised by their own behaviour and subjectively reported that the 'broken escalator phenomenon', as experienced in urban life, felt similar to the experiment. By the second trial, most movement parameters had returned to baseline values. The findings represent a motor aftereffect of walking onto a moving platform that occurs despite full knowledge of the changing context. As such, it demonstrates dissociation between the declarative and procedural systems in the CNS. Since gait velocity was raised before foot-sled contact, the findings are at least partly explained by open-loop, predictive behaviour. A cautious strategy of limb stiffness was not responsible for the aftereffect, as revealed by no increase in muscle cocontraction. The observed aftereffect is unlike others previously reported in the literature, which occur only after prolonged continuous exposure to a sensory mismatch, large numbers of learning trials or unpredictable catch trials. The relative ease with which the aftereffect was induced suggests that locomotor adaptation may be more impervious to cognitive control than other types of motor learning.
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Embodied learning of a generative neural model for biological motion perception and inference
Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.
2015-01-01
Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215
Petermeijer, Sebastiaan M; Abbink, David A; de Winter, Joost C F
2015-02-01
The aim of this study was to compare continuous versus bandwidth haptic steering guidance in terms of lane-keeping behavior, aftereffects, and satisfaction. An important human factors question is whether operators should be supported continuously or only when tolerance limits are exceeded. We aimed to clarify this issue for haptic steering guidance by investigating costs and benefits of both approaches in a driving simulator. Thirty-two participants drove five trials, each with a different level of haptic support: no guidance (Manual); guidance outside a 0.5-m bandwidth (Band1); a hysteresis version of Band1, which guided back to the lane center once triggered (Band2); continuous guidance (Cont); and Cont with double feedback gain (ContS). Participants performed a reaction time task while driving. Toward the end of each trial, the guidance was unexpectedly disabled to investigate aftereffects. All four guidance systems prevented large lateral errors (>0.7 m). Cont and especially ContS yielded smaller lateral errors and higher time to line crossing than Manual, Band1, and Band2. Cont and ContS yielded short-lasting aftereffects, whereas Band1 and Band2 did not. Cont yielded higher self-reported satisfaction and faster reaction times than Band1. Continuous and bandwidth guidance both prevent large driver errors. Continuous guidance yields improved performance and satisfaction over bandwidth guidance at the cost of aftereffects and variability in driver torque (indicating human-automation conflicts). The presented results are useful for designers of haptic guidance systems and support critical thinking about the costs and benefits of automation support systems.
Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.
2011-01-01
The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality reduction, remains a nearly insurmountable challenge. The Statistics Online Computational Resource (www.SOCR.ucla.edu) provides portable online aids for probability and statistics education, technology-based instruction and statistical computing. We have developed a new Java-based infrastructure, SOCR Motion Charts, for discovery-based exploratory analysis of multivariate data. This interactive data visualization tool enables the visualization of high-dimensional longitudinal data. SOCR Motion Charts allows mapping of ordinal, nominal and quantitative variables onto time, 2D axes, size, colors, glyphs and appearance characteristics, which facilitates the interactive display of multidimensional data. We validated this new visualization paradigm using several publicly available multivariate datasets including Ice-Thickness, Housing Prices, Consumer Price Index, and California Ozone Data. SOCR Motion Charts is designed using object-oriented programming, implemented as a Java Web-applet and is available to the entire community on the web at www.socr.ucla.edu/SOCR_MotionCharts. It can be used as an instructional tool for rendering and interrogating high-dimensional data in the classroom, as well as a research tool for exploratory data analysis. PMID:21479108
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality still existed. These findings likely result from the higher temporal resolution of the auditory domain, which in turn likely reflects physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
When eyes drive hand: Influence of non-biological motion on visuo-motor coupling.
Thoret, Etienne; Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard
2016-01-26
Many studies have stressed that both the execution of human movement and the perception of motion are constrained by specific kinematics. For instance, it has been shown that the visuo-manual tracking of a spotlight was optimal when the spotlight motion complies with biological rules such as the so-called 1/3 power law, establishing the co-variation between the velocity and the trajectory curvature of the movement. The visual or kinesthetic perception of a geometry induced by motion has also been shown to be constrained by such biological rules. In the present study, we investigated whether the geometry induced by the visuo-motor coupling of biological movements was also constrained by the 1/3 power law under visual open loop control, i.e. without visual feedback of arm displacement. We showed that when someone was asked to synchronize a drawing movement with a visual spotlight following a circular shape, the geometry of the reproduced shape was distorted by visual kinematics that did not respect the 1/3 power law. In particular, elliptical shapes were reproduced when the circle was traced with kinematics corresponding to an ellipse. Moreover, the distortions observed here were larger than in the perceptual tasks, underscoring the role of motor attractors in such a visuo-motor coupling. Finally, by investigating the direct influence of visual kinematics on the motor reproduction, our result conciliates previous knowledge on sensorimotor coupling of biological motions with external stimuli and provides evidence for the amodal encoding of biological motion. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
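The 1/3 power law referred to above states that the tangential velocity v of a movement covaries with the local radius of curvature R of its trace as v = K·R^(1/3) (equivalently, v = K·curvature^(-1/3)). A minimal numerical sketch, with arbitrary gain and ellipse parameters:

```python
# Minimal sketch of the 1/3 power law: tangential velocity v covaries with
# the radius of curvature R of the trace as v = K * R**(1/3). The gain K
# and ellipse semi-axes are arbitrary illustrative choices.
import numpy as np

K, a, b = 1.0, 2.0, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
x, y = a * np.cos(t), b * np.sin(t)                 # elliptical trace

# Curvature from derivatives with respect to the curve parameter.
dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
curvature = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
R = 1.0 / curvature

v = K * R ** (1.0 / 3.0)        # velocity prescribed by the power law

# A log-log fit recovers the exponent: the slope should be 1/3.
exponent = np.polyfit(np.log(R), np.log(v), 1)[0]
```

On an ellipse the curvature varies along the path, so a power-law-compliant movement slows in the tightly curved segments and speeds up in the flat ones; a spotlight animated with the "wrong" velocity profile therefore carries the kinematic signature of a different shape, which is the manipulation the study exploits.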
Breaking cover: neural responses to slow and fast camouflage-breaking motion.
Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M; McLoughlin, Niall; Wang, Wei
2015-08-22
Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed, camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that camouflaged motion at low and high speeds is initially encoded as direction and motion-streak signals, respectively, in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. © 2015 The Authors.
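The motion-streak account rests on a simple geometric fact: temporal integration in the visual system smears a moving dot into an oriented trace whose length grows with speed. The integration window and dot size in the back-of-envelope sketch below are assumed values, not measurements from the study:

```python
# Back-of-envelope sketch of the motion-streak idea: temporal integration
# smears a moving dot into an oriented streak whose length grows with speed.
# The window length and dot size below are assumed illustrative values.

T = 0.1          # assumed temporal integration window (s)
dot_size = 0.2   # assumed dot diameter (deg)

def streak_length(speed_deg_per_s):
    """Spatial extent of the time-integrated image of the dot (deg)."""
    return speed_deg_per_s * T + dot_size

slow_streak = streak_length(1.0)    # slow motion: barely elongated
fast_streak = streak_length(20.0)   # fast motion: a clearly oriented streak

# Elongation relative to the dot itself: only the fast streak is much longer
# than the dot, making it an effective stimulus for orientation-selective
# (rather than direction-selective) neurons.
slow_ratio = slow_streak / dot_size
fast_ratio = fast_streak / dot_size
```

Under these assumptions the slow dot barely elongates while the fast dot leaves a streak an order of magnitude longer than its diameter, matching the reported transition from direction-based to streak-based encoding as speed increases.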
A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.
Ratzlaff, Michael; Nawrot, Mark
2016-09-01
The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth while visual motion in the opposite direction as pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.
Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne
2017-04-01
We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)
1998-01-01
When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.
The effect of visual-motion time delays on pilot performance in a pursuit tracking task
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1976-01-01
A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.
Bidirectional Gender Face Aftereffects: Evidence Against Normative Facial Coding.
Cronin, Sophie L; Spence, Morgan L; Miller, Paul A; Arnold, Derek H
2017-02-01
Facial appearance can be altered, not just by restyling but also by sensory processes. Exposure to a female face can, for instance, make subsequent faces look more masculine than they would otherwise. Two explanations exist. According to one, exposure to a female face renormalizes face perception, making that female and all other faces look more masculine as a consequence-a unidirectional effect. According to that explanation, exposure to a male face would have the opposite unidirectional effect. Another suggestion is that face gender is subject to contrastive aftereffects. These should make some faces look more masculine than the adaptor and other faces more feminine-a bidirectional effect. Here, we show that face gender aftereffects are bidirectional, as predicted by the latter hypothesis. Images of real faces rated as more and less masculine than adaptors at baseline tended to look even more and less masculine than adaptors post adaptation. This suggests that, rather than mental representations of all faces being recalibrated to better reflect the prevailing statistics of the environment, mental operations exaggerate differences between successive faces, and this can impact facial gender perception.
Wang, Hao; Crewther, Sheila G.; Liang, Minglong; Laycock, Robin; Yu, Tao; Alexander, Bonnie; Crewther, David P.; Wang, Jian; Yin, Zhengqin
2017-01-01
Strabismic amblyopia is now acknowledged to be more than a simple loss of acuity and to involve alterations in visually driven attention, though whether this applies to both stimulus-driven and goal-directed attention has not been explored. Hence we investigated monocular threshold performance during a motion salience-driven attention task involving detection of a coherent dot motion target in one of four quadrants in adult controls and those with strabismic amblyopia. Psychophysical motion thresholds were impaired for the strabismic amblyopic eye, requiring longer inspection time and consequently slower target speed for detection compared to the fellow eye or control eyes. We compared fMRI activation and functional connectivity between four ROIs of the occipital-parieto-frontal visual attention network [primary visual cortex (V1), motion sensitive area V5, intraparietal sulcus (IPS) and frontal eye fields (FEF)], during a suprathreshold version of the motion-driven attention task, and also a simple goal-directed task, requiring voluntary saccades to targets randomly appearing along a horizontal line. Activation was compared when viewed monocularly by controls and the amblyopic and its fellow eye in strabismics. BOLD activation was weaker in IPS, FEF and V5 for both tasks when viewing through the amblyopic eye compared to viewing through the fellow eye or control participants' non-dominant eye. No difference in V1 activation was seen between the amblyopic and fellow eye, nor between the two eyes of control participants during the motion salience task, though V1 activation was significantly less through the amblyopic eye than through the fellow eye and control group non-dominant eye viewing during the voluntary saccade task. Functional correlations of ROIs within the attention network were impaired through the amblyopic eye during the motion salience task, whereas this was not the case during the voluntary saccade task. Specifically, FEF showed reduced functional connectivity with visual cortical nodes during the motion salience task through the amblyopic eye, despite suprathreshold detection performance. This suggests that the reduced ability of the amblyopic eye to activate the frontal components of the attention networks may help explain the aberrant control of visual attention and eye movements in amblyopes. PMID:28484381
Walking modulates speed sensitivity in Drosophila motion vision.
Chiappe, M Eugenia; Seelig, Johannes D; Reiser, Michael B; Jayaraman, Vivek
2010-08-24
Changes in behavioral state modify neural activity in many systems. In some vertebrates such modulation has been observed and interpreted in the context of attention and sensorimotor coordinate transformations. Here we report state-dependent activity modulations during walking in a visual-motor pathway of Drosophila. We used two-photon imaging to monitor intracellular calcium activity in motion-sensitive lobula plate tangential cells (LPTCs) in head-fixed Drosophila walking on an air-supported ball. Cells of the horizontal system (HS)--a subgroup of LPTCs--showed stronger calcium transients in response to visual motion when flies were walking rather than resting. The amplified responses were also correlated with walking speed. Moreover, HS neurons showed a relatively higher gain in response strength at higher temporal frequencies, and their optimum temporal frequency was shifted toward higher motion speeds. Walking-dependent modulation of HS neurons in the Drosophila visual system may constitute a mechanism to facilitate processing of higher image speeds in behavioral contexts where these speeds of visual motion are relevant for course stabilization. Copyright 2010 Elsevier Ltd. All rights reserved.
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus on visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated that the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli of the same size. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
Vestibular nuclei and cerebellum put visual gravitational motion in context.
Miller, William L; Maffei, Vincenzo; Bosco, Gianfranco; Iosa, Marco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco
2008-04-01
Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion.
Neural Circuit to Integrate Opposing Motions in the Visual Field.
Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander
2015-07-16
When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still in debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception of human locomotion. Experiment 1 shows that human newborns prefer a point-light walker display representing human locomotion as if on a treadmill over random motion. However, no preference for biological movement is observed in Experiment 2 when both biological and random motion displays are presented with translational displacement. Experiments 3 and 4 show that newborns exhibit preference for translated biological motion (Experiment 3) and random motion (Experiment 4) displays over the same configurations moving without translation. These findings reveal that human newborns have a preference for the translational component of movement independently of the presence of biological kinematics. The outcome suggests that translation constitutes the first step in development of visual preference for biological motion. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Adams, Haley; Narasimham, Gayathri; Rieser, John; Creem-Regehr, Sarah; Stefanucci, Jeanine; Bodenheimer, Bobby
2018-04-01
As virtual reality expands in popularity, an increasingly diverse audience is gaining exposure to immersive virtual environments (IVEs). A significant body of research has demonstrated how perception and action work in such environments, but most of this work has been done studying adults. Less is known about how physical and cognitive development affect perception and action in IVEs, particularly as applied to preteen and teenage children. Accordingly, in the current study we assess how preteens (children aged 8-12 years) and teenagers (children aged 15-18 years) respond to mismatches between their motor behavior and the visual information presented by an IVE. Over two experiments, we evaluate how these individuals recalibrate their actions across functionally distinct systems of movement. The first experiment analyzed forward walking recalibration after exposure to an IVE with either increased or decreased visual flow. Visual flow during normal bipedal locomotion was manipulated to be either twice or half as fast as the physical gait. The second experiment leveraged a prism throwing adaptation paradigm to test the effect of recalibration on throwing movement. In the first experiment, our results show no differences across age groups, although subjects generally experienced a post-exposure effect of shortened distance estimation after experiencing visually faster flow and longer distance estimation after experiencing visually slower flow. In the second experiment, subjects generally showed the typical prism adaptation behavior of a throwing after-effect error. The error lasted longer for preteens than older children. Our results have implications for the design of virtual systems with children as a target audience.
Optic flow detection is not influenced by visual-vestibular congruency.
Holten, Vivian; MacNeilage, Paul R
2018-01-01
Optic flow patterns generated by self-motion relative to the stationary environment result in congruent visual-vestibular self-motion signals. Incongruent signals can arise from object motion, vestibular dysfunction, or artificial stimulation, all of which are less common. Hence, we are predominantly exposed to congruent rather than incongruent visual-vestibular stimulation. If the brain takes advantage of this probabilistic association, we expect observers to be more sensitive to visual optic flow that is congruent with ongoing vestibular stimulation. We tested this expectation by measuring the motion coherence threshold, which is the percentage of signal versus noise dots necessary to detect an optic flow pattern. Observers seated on a hexapod motion platform in front of a screen experienced two sequential intervals. One interval contained optic flow with a given motion coherence and the other contained noise dots only. Observers had to indicate which interval contained the optic flow pattern. The motion coherence threshold was measured for detection of laminar and radial optic flow during leftward/rightward and fore/aft linear self-motion, respectively. We observed no dependence of coherence thresholds on vestibular congruency for either radial or laminar optic flow. Prior studies using similar methods reported both decreases and increases in coherence thresholds in response to congruent vestibular stimulation; our results do not confirm either of these prior reports. While methodological differences may explain the diversity of results, another possibility is that motion coherence thresholds are mediated by neural populations that are either not modulated by vestibular stimulation or that are modulated in a manner that does not depend on congruency.
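The two-interval detection procedure described above is typically paired with an adaptive staircase to find the coherence threshold. The sketch below is illustrative only: the observer model (a logistic psychometric function with an assumed 0.2 coherence midpoint) and the 1-up/2-down rule are generic psychophysics conventions, not details taken from this study.

```python
import math
import random

def simulated_observer(coherence, midpoint=0.2, slope=0.1):
    """Hypothetical observer: chance of choosing the signal interval
    rises from ~50% toward 100% with motion coherence (logistic)."""
    p_correct = 0.5 + 0.5 / (1.0 + math.exp(-(coherence - midpoint) / slope))
    return random.random() < p_correct

def staircase_threshold(trials=300, start=0.5, step=0.02):
    """1-up/2-down staircase; converges near the 70.7%-correct coherence.

    Threshold is estimated as the mean coherence at the reversal points.
    """
    coherence, streak, direction, reversals = start, 0, -1, []
    for _ in range(trials):
        if simulated_observer(coherence):
            streak += 1
            if streak == 2:               # two correct in a row -> harder
                streak = 0
                if direction == +1:
                    reversals.append(coherence)
                direction = -1
                coherence = max(0.0, coherence - step)
        else:                             # any error -> easier
            streak = 0
            if direction == -1:
                reversals.append(coherence)
            direction = +1
            coherence = min(1.0, coherence + step)
    return sum(reversals) / len(reversals)
```

Seeding the random generator makes a run reproducible; with the assumed observer the estimate settles somewhat below the 0.2 midpoint, as expected for the 70.7%-correct convergence point of this rule.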
Selectivity to Translational Egomotion in Human Brain Motion Areas
Pitzalis, Sabrina; Sdoia, Stefano; Bultrini, Alessandro; Committeri, Giorgia; Di Russo, Francesco; Fattori, Patrizia; Galletti, Claudio; Galati, Gaspare
2013-01-01
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulation in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an intraparietal sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response to translational flow that was maximal in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about the location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment. PMID:23577096
Spatio-Temporal Brain Mapping of Motion-Onset VEPs Combined with fMRI and Retinotopic Maps
Pitzalis, Sabrina; Strappini, Francesca; De Gasperis, Marco; Bultrini, Alessandro; Di Russo, Francesco
2012-01-01
Neuroimaging studies have identified several motion-sensitive visual areas in the human brain, but the time course of their activation cannot be measured with these techniques. In the present study, we combined electrophysiological and neuroimaging methods (including retinotopic brain mapping) to determine the spatio-temporal profile of motion-onset visual evoked potentials for slow and fast motion stimuli and to localize their neural generators. We found that cortical activity initiates in the primary visual area (V1) for slow stimuli, peaking 100 ms after the onset of motion. Subsequently, activity in the mid-temporal motion-sensitive areas, MT+, peaked at 120 ms, followed by peaks in activity in the more dorsal area, V3A, at 160 ms and the lateral occipital complex at 180 ms. Approximately 250 ms after stimulus onset, activity was predominant in area V6 along the parieto-occipital sulcus. Finally, at 350 ms (100 ms after the motion offset) brain activity was visible again in area V1. For fast motion stimuli, the spatio-temporal brain pattern was similar, except that the first activity was detected at 70 ms in area MT+. Comparing functional magnetic resonance data for slow vs. fast motion, we found signs of slow-fast motion stimulus topography along the posterior brain in at least three cortical regions (MT+, V3A and the lateral occipital complex). PMID:22558222
Orientation selectivity sharpens motion detection in Drosophila
Fisher, Yvette E.; Silies, Marion; Clandinin, Thomas R.
2015-01-01
Detecting the orientation and movement of edges in a scene is critical to visually guided behaviors of many animals. What are the circuit algorithms that allow the brain to extract such behaviorally vital visual cues? Using in vivo two-photon calcium imaging in Drosophila, we describe direction selective signals in the dendrites of T4 and T5 neurons, detectors of local motion. We demonstrate that this circuit performs selective amplification of local light inputs, an observation that constrains motion detection models and confirms a core prediction of the Hassenstein-Reichardt Correlator (HRC). These neurons are also orientation selective, responding strongly to static features that are orthogonal to their preferred axis of motion, a tuning property not predicted by the HRC. This coincident extraction of orientation and direction sharpens directional tuning through surround inhibition and reveals a striking parallel between visual processing in flies and vertebrate cortex, suggesting a universal strategy for motion processing. PMID:26456048
Bosworth, Rain G.; Petrich, Jennifer A.; Dobkins, Karen R.
2012-01-01
In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the Dorsal and Ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task, in either the LVF or RVF. In contrast, for both motion tasks, there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically, for motion processing in the Dorsal stream. PMID:22051893
An experimental study of the nonlinear dynamic phenomenon known as wing rock
NASA Technical Reports Server (NTRS)
Arena, A. S., Jr.; Nelson, R. C.; Schiff, L. B.
1990-01-01
An experimental investigation into the physical phenomena associated with limit cycle wing rock on slender delta wings has been conducted. The model used was a slender flat plate delta wing with 80-deg leading edge sweep. The investigation concentrated on three main areas: motion characteristics obtained from time history plots, static and dynamic flow visualization of vortex position, and static and dynamic flow visualization of vortex breakdown. The flow visualization studies are correlated with model motion to determine the relationship between vortex position and vortex breakdown with the dynamic rolling moments. Dynamic roll moment coefficient curves reveal rate-dependent hysteresis, which drives the motion. Vortex position correlated with time and model motion show a time lag in the normal position of the upward moving wing vortex. This time lag may be the mechanism responsible for the hysteresis. Vortex breakdown is shown to have a damping effect on the motion.
NASA Astrophysics Data System (ADS)
Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi
2007-07-01
Intensity-modulated radiation therapy (IMRT) utilizes a nonuniform beam profile to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of the volunteer's breathing could be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.
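Why the visual guidance step matters can be illustrated with a toy sinusoidal target-motion model (the amplitude, periods, and sampling below are invented for illustration, not values from this study): a leaf sequence synchronized to one breathing period tracks the target exactly, but any drift in the actual breathing period opens a gap between aperture and target.

```python
import math

def target_position(t, period, amplitude=1.0):
    """Idealized periodic target displacement (cm) due to breathing."""
    return amplitude * math.sin(2.0 * math.pi * t / period)

def max_tracking_error(planned_period, actual_period,
                       duration=20.0, dt=0.1):
    """Worst-case gap (cm) between a leaf sequence planned for
    `planned_period` and a target actually moving at `actual_period`."""
    return max(abs(target_position(k * dt, planned_period) -
                   target_position(k * dt, actual_period))
               for k in range(int(duration / dt)))
```

With perfectly regular, guided breathing the mismatch vanishes (`max_tracking_error(4.0, 4.0)` is 0), while a one-second period drift in this toy model produces centimeter-scale excursions, hence the need to regularize breathing before synchronizing the leaves.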
Motion transparency: making models of motion perception transparent.
Snowden; Verstraten
1999-10-01
In daily life our visual system is bombarded with motion information. We see cars driving by, flocks of birds flying in the sky, clouds passing behind trees that are dancing in the wind. Vision science has a good understanding of the first stage of visual motion processing, that is, the mechanism underlying the detection of local motions. Currently, research is focused on the processes that occur beyond the first stage. At this level, local motions have to be integrated to form objects, define the boundaries between them, construct surfaces and so on. An interesting, if complicated case is known as motion transparency: the situation in which two overlapping surfaces move transparently over each other. In that case two motions have to be assigned to the same retinal location. Several researchers have tried to solve this problem from a computational point of view, using physiological and psychophysical results as a guideline. We will discuss two models: one uses the traditional idea known as 'filter selection' and the other a relatively new approach based on Bayesian inference. Predictions from these models are compared with our own visual behaviour and that of the neural substrates that are presumed to underlie these perceptions.
Spering, Miriam; Montagnini, Anna
2011-04-22
Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is, at an early stage, processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.
Development of adaptive sensorimotor control in infant sitting posture.
Chen, Li-Chiou; Jeka, John; Clark, Jane E
2016-03-01
A reliable and adaptive relationship between action and perception is necessary for postural control. Our understanding of how this adaptive sensorimotor control develops during infancy is very limited. This study examines the dynamic visual-postural relationship during early development. Twenty healthy infants were divided into 4 developmental groups (each n=5): sitting onset, standing alone, walking onset, and 1-year post-walking. During the experiment, the infant sat independently in a virtual moving-room in which anterior-posterior oscillations of visual motion were presented using a sum-of-sines technique with five input frequencies (from 0.12 to 1.24 Hz). Infants were tested in five conditions that varied in the amplitude of visual motion (from 0 to 8.64 cm). Gain and phase responses of infants' postural sway were analyzed. Our results showed that infants, from a few months post-sitting to 1 year post-walking, were able to control their sitting posture in response to various frequency and amplitude properties of the visual motion. Infants showed an adult-like inverted-U pattern for the frequency response to visual inputs, with the highest gain at 0.52 and 0.76 Hz. As the visual motion amplitude increased, the gain response decreased. For the phase response, an adult-like frequency-dependent pattern was observed in all amplitude conditions for the experienced walkers. Newly sitting infants, however, showed variable postural behavior and did not systematically respond to the visual stimulus. Our results suggest that visual-postural entrainment and sensory re-weighting are fundamental processes that are present a few months post-sitting. Sensorimotor refinement during early postural development may result from the interactions of improved self-motion control and enhanced perceptual abilities. Copyright © 2016 Elsevier B.V. All rights reserved.
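The gain and phase measures used in such sum-of-sines paradigms come from projecting the sway record onto each stimulus frequency. A minimal sketch follows; the sampling rate, record length, and the toy "sway" response (gain 0.6, 30 degree lag) are assumptions, and only four of the study's five stimulus frequencies are named in the abstract, so only those four are used here.

```python
import cmath
import math

FS = 50.0                           # sampling rate in Hz (assumed)
DUR = 100.0                         # record length in s (assumed)
FREQS = [0.12, 0.52, 0.76, 1.24]    # Hz; the four frequencies named above

def fourier_coeff(signal, freq, fs):
    """Complex amplitude of `signal` at `freq` (single-frequency DFT)."""
    n = len(signal)
    return (2.0 / n) * sum(s * cmath.exp(-2j * math.pi * freq * k / fs)
                           for k, s in enumerate(signal))

def gain_phase(stimulus, response, freq, fs):
    """Gain (|response|/|stimulus|) and phase (degrees) at one frequency."""
    ratio = fourier_coeff(response, freq, fs) / fourier_coeff(stimulus, freq, fs)
    return abs(ratio), math.degrees(cmath.phase(ratio))

# Sum-of-sines room motion and a toy sway response (gain 0.6, 30 deg lag).
t = [k / FS for k in range(int(FS * DUR))]
room = [sum(math.sin(2 * math.pi * f * tk) for f in FREQS) for tk in t]
sway = [sum(0.6 * math.sin(2 * math.pi * f * tk - math.radians(30))
            for f in FREQS) for tk in t]
```

Because each frequency completes an integer number of cycles in the 100 s record, the components are orthogonal and `gain_phase(room, sway, f, FS)` recovers gain 0.6 and phase -30 degrees at every input frequency, exactly as constructed.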
Motion perception tasks as potential correlates to driving difficulty in the elderly
NASA Astrophysics Data System (ADS)
Raghuram, A.; Lakshminarayanan, V.
2006-09-01
Changes in demographics indicate that the population older than 65 is on the rise because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test, the useful field of view (UFOV), was a better predictor of accident rates than any other visual function test. The present study evaluates a qualitative trend on using motion perception tasks as potential visual-perceptual correlates for screening elderly drivers who might have difficulty driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimation of heading direction. A motion index score, indicative of performance on all of the above-mentioned motion tasks, was calculated. Visual attention was assessed using the UFOV, and a driving habit questionnaire was administered for self-report of driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks successfully identified subjects who reported difficulty with certain aspects of driving and a history of accidents. The correlation between UFOV and motion index scores was not significant, indicating that the two paradigms probably tap different aspects of visual information processing crucial to driving behaviour. Together, UFOV and motion perception tasks may be a better predictor for identifying at-risk or safe drivers than either one alone.
Automated reference-free detection of motion artifacts in magnetic resonance images.
Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios
2018-04-01
Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers under rest and under motion. Images were divided into overlapping patches of different sizes achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference of probability maps was observed between data sets with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
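The patch-wise scoring and spatial aggregation described here can be sketched without any deep-learning library. The variance heuristic below merely stands in for the trained CNN, and the patch and stride sizes are arbitrary; only the overlapping-patch, averaged probability-map idea comes from the abstract.

```python
def patch_probability_map(image, patch=4, stride=2, score=None):
    """Score overlapping patches and average scores back per pixel.

    `score` stands in for the paper's trained CNN; the default is a toy
    heuristic (clamped patch variance), purely illustrative.
    """
    h, w = len(image), len(image[0])
    if score is None:
        def score(p):
            vals = [v for row in p for v in row]
            m = sum(vals) / len(vals)
            var = sum((v - m) ** 2 for v in vals) / len(vals)
            return min(1.0, var)         # clamp to a pseudo-probability
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            s = score([row[j:j + patch] for row in image[i:i + patch]])
            for di in range(patch):
                for dj in range(patch):
                    acc[i + di][j + dj] += s
                    cnt[i + di][j + dj] += 1
    return [[acc[i][j] / cnt[i][j] if cnt[i][j] else 0.0
             for j in range(w)] for i in range(h)]
```

A uniform image yields a map of zeros, while regions with structured intensity variation (standing in for motion artifacts) receive higher values, giving the spatially resolved localization the paper describes.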
Summation of visual motion across eye movements reflects a nonspatial decision mechanism.
Morris, Adam P; Liu, Charles C; Cropper, Simon J; Forte, Jason D; Krekelberg, Bart; Mattingley, Jason B
2010-07-21
Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
Can biological motion research provide insight on how to reduce friendly fire incidents?
Steel, Kylie A; Baxter, David; Dogramaci, Sera; Cobley, Stephen; Ellem, Eathan
2016-10-01
The ability to accurately detect, perceive, and recognize biological motion can be associated with a fundamental drive for survival, and it is of significant interest to perception researchers. This field examines various perceptual features of motion and has been assessed and applied in several real-world contexts (e.g., biometric, sport). Unexplored applications still exist, however, including the military issue of friendly fire. There are many causes and processes leading to friendly fire, and specific challenges are associated with visual information extraction during engagement, such as brief glimpses, low acuity, camouflage, and uniform deception. Furthermore, visual information must often be processed under highly stressful (potentially threatening), time-constrained conditions that present a significant problem for soldiers. Biological motion research and anecdotal evidence from experienced combatants suggest that the intentions, emotions, and identities of human movers can be identified and discriminated, even when the visual display is degraded or limited. Furthermore, research suggests that perceptual discrimination of movement under visually constrained conditions is trainable. Therefore, given the limited military research linked to biological motion and friendly fire, an opportunity for cross-disciplinary investigation exists. The focus of this paper is twofold: first, to provide evidence for a possible link between biological motion factors and friendly fire, and second, to propose conceptual and methodological considerations and recommendations for perceptual-cognitive training within current military programs.
Burnat, Kalina; Hu, Tjing-Tjing; Kossut, Małgorzata; Eysel, Ulf T; Arckens, Lutgarde
2017-09-13
Induction of a central retinal lesion in both eyes of adult mammals is a model for macular degeneration and leads to retinotopic map reorganization in the primary visual cortex (V1). Here we characterized the spatiotemporal dynamics of molecular activity levels in the central and peripheral representation of five higher-order visual areas, V2/18, V3/19, V4/21a, V5/PMLS, and area 7, as well as V1/17, in adult cats of both sexes with central 10° retinal lesions, by means of real-time PCR for the neuronal activity reporter gene zif268. The lesions elicited a similar, permanent reduction in activity in the center of the lesion projection zone (LPZ) of areas V1/17, V2/18, V3/19, and V4/21a, but not in the motion-driven V5/PMLS, which instead displayed an increase in molecular activity at 3 months postlesion, independent of visual field coordinates. Area 7 likewise displayed decreased activity in its LPZ only in the first weeks postlesion, and increased activity in its periphery from 1 month onward. We therefore examined the impact of central vision loss on motion perception using random dot kinematograms to test the capacity for form-from-motion detection based on direction and velocity cues. We revealed that the central retinal lesions either do not impair motion detection or even result in better performance, specifically when motion discrimination was based on velocity discrimination. In conclusion, we propose that central retinal damage leads to enhanced peripheral vision by sensitizing the visual system for motion processing relying on feedback from V5/PMLS and area 7. SIGNIFICANCE STATEMENT Central retinal lesions, a model for macular degeneration, result in functional reorganization of the primary visual cortex. Examining the level of cortical reactivation with the molecular activity marker zif268 revealed reorganization in visual areas outside V1.
Retinotopic lesion projection zones typically display an initial depression in zif268 expression, followed by partial recovery with postlesion time. Only the motion-sensitive area V5/PMLS shows no decrease, and even a significant activity increase at 3 months post-retinal lesion. Behavioral tests of motion perception found no impairment and even better sensitivity to higher random dot stimulus velocities. We demonstrate that the loss of central vision induces functional mobilization of motion-sensitive visual cortex, resulting in enhanced perception of moving stimuli. Copyright © 2017 the authors.
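The behavioral tests above used random dot kinematograms in which either direction or velocity carries the form-from-motion signal. As a minimal illustrative sketch (not the study's actual stimulus code; the function name and parameters are assumptions), one frame update of such a display might look like:

```python
import numpy as np

def rdk_step(xy, coherence, direction, speed, rng, noise_speed=None):
    """Advance random-dot-kinematogram dots by one frame.

    A `coherence` fraction of dots moves along `direction` (radians) at
    `speed`; the remainder move in random directions, optionally at a
    different `noise_speed`, so that either direction or velocity can
    carry the signal. Coordinates wrap within the unit square [0, 1).
    """
    n = len(xy)
    n_signal = int(round(coherence * n))
    # Random directions for all dots, then overwrite the signal subset.
    angles = rng.uniform(0, 2 * np.pi, n)
    angles[:n_signal] = direction
    speeds = np.full(n, speed if noise_speed is None else noise_speed)
    speeds[:n_signal] = speed
    step = speeds[:, None] * np.column_stack([np.cos(angles), np.sin(angles)])
    return (xy + step) % 1.0

rng = np.random.default_rng(0)
dots = rng.random((100, 2))
dots = rdk_step(dots, coherence=0.5, direction=0.0, speed=0.01, rng=rng)
```

With `coherence=0.5` half of the dots drift rightward in lockstep while the rest move randomly, which is the standard way such displays trade off signal strength against noise.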
Allen, Christopher P. G.; Dunkley, Benjamin T.; Muthukumaraswamy, Suresh D.; Edden, Richard; Evans, C. John; Sumner, Petroc; Singh, Krish D.; Chambers, Christopher D.
2014-01-01
This series of experiments investigated the neural basis of conscious vision in humans using a form of transcranial magnetic stimulation (TMS) known as continuous theta burst stimulation (cTBS). Previous studies have shown that occipital TMS, when time-locked to the onset of visual stimuli, can induce a phenomenon analogous to blindsight in which conscious detection is impaired while the ability to discriminate ‘unseen’ stimuli is preserved above chance. Here we sought to reproduce this phenomenon using offline occipital cTBS, which has been shown to induce an inhibitory cortical aftereffect lasting 45–60 minutes. Contrary to expectations, our first experiment revealed the opposite effect: cTBS enhanced conscious vision relative to a sham control. We then sought to replicate this cTBS-induced potentiation of consciousness in conjunction with magnetoencephalography (MEG) and undertook additional experiments to assess its relationship to visual cortical excitability and levels of the inhibitory neurotransmitter γ-aminobutyric acid (GABA; via magnetic resonance spectroscopy, MRS). Occipital cTBS decreased cortical excitability and increased regional GABA concentration. No significant effects of cTBS on MEG measures were observed, although the results provided weak evidence for potentiation of event related desynchronisation in the β band. Collectively these experiments suggest that, through the suppression of noise, cTBS can increase the signal-to-noise ratio of neural activity underlying conscious vision. We speculate that gating-by-inhibition in the visual cortex may provide a key foundation of consciousness. PMID:24956195
Preattentive binding of auditory and visual stimulus features.
Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo
2005-02-01
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli), and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities; (2) there exist preattentive processes of feature binding; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George
2017-08-15
A long-held view of the visual system is that form and motion are analysed independently. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive transcranial magnetic stimulation (rTMS) to investigate, in human observers, the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over early visual areas affected the perception of static GPs, whereas stimulation of area V5/MT did not affect observers' performance. On the other hand, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that early visual areas are involved in processing the spatial structure of GPs, and that interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly by disrupting early form-motion integration. Visual area V5/MT, by contrast, is likely to be involved only in processing the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2. Copyright © 2017 Elsevier Inc. All rights reserved.
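A translational Glass pattern of the kind described above can be sketched in a few lines. This is a hedged illustration, assuming unit-square coordinates; the function name, dipole length, and coherence parameter are our own, not the study's stimulus code:

```python
import numpy as np

def glass_pattern(n_dipoles=200, coherence=0.5, orientation=0.0,
                  dipole_len=0.05, rng=None):
    """Generate dot coordinates for a translational Glass pattern.

    A fraction `coherence` of dipoles is oriented along `orientation`
    (radians); the rest receive random orientations, acting as noise.
    Returns an array of shape (2 * n_dipoles, 2) with x, y in [0, 1).
    """
    rng = np.random.default_rng(rng)
    n_signal = int(round(coherence * n_dipoles))
    # First dot of each dipole is placed uniformly at random.
    anchors = rng.random((n_dipoles, 2))
    # Signal dipoles share one orientation; noise dipoles are random.
    angles = rng.uniform(0, 2 * np.pi, n_dipoles)
    angles[:n_signal] = orientation
    offsets = dipole_len * np.column_stack([np.cos(angles), np.sin(angles)])
    partners = (anchors + offsets) % 1.0  # wrap around the unit square
    return np.vstack([anchors, partners])

dots = glass_pattern(n_dipoles=100, coherence=0.6, rng=0)
```

A dynamic GP would simply regenerate the anchor positions each frame while keeping the dipole orientation fixed, which produces the non-directional motion component along the orientation axis mentioned in the abstract.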
Haltere mechanosensory influence on tethered flight behavior in Drosophila.
Mureli, Shwetha; Fox, Jessica L
2015-08-01
In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.
NASA Technical Reports Server (NTRS)
Harm, D. L.; Taylor, L. C.
2006-01-01
Virtual environments offer unique training opportunities, particularly for training astronauts and preadapting them to the novel sensory conditions of microgravity. Two unresolved human factors issues in virtual reality (VR) systems are: 1) potential "cybersickness", and 2) maladaptive sensorimotor performance following exposure to VR systems. Interestingly, these aftereffects are often quite similar to adaptive sensorimotor responses observed in astronauts during and/or following space flight. Changes in the environmental sensory stimulus conditions and the way we interact with the new stimuli may result in motion sickness, and perceptual, spatial orientation and sensorimotor disturbances. Initial interpretation of novel sensory information may be inappropriate and result in perceptual errors. Active exploratory behavior in a new environment, with resulting feedback and the formation of new associations between sensory inputs and response outputs, promotes appropriate perception and motor control in the new environment. Thus, people adapt to consistent, sustained alterations of sensory input such as those produced by microgravity, unilateral labyrinthectomy and experimentally produced stimulus rearrangements. Adaptation is revealed by aftereffects including perceptual disturbances and sensorimotor control disturbances. The purpose of the current study was to compare disturbances in postural control produced by dome and head-mounted virtual environment displays, and to examine the effects of exposure duration, and repeated exposures to VR systems. Forty-one subjects (21 men, 20 women), aged 21-49 years, participated in the study. One training session was completed in order to achieve stable performance on the posture and VR tasks before participating in the experimental sessions. Three experimental sessions were then performed, each separated by one day.
The subjects performed a navigation and pick-and-place task in either a dome or head-mounted display (HMD) VR system for either 30 or 60 min. The environment was a square room with 15 pedestals on two opposite walls. The objects appeared on one set of pedestals and the subject's objective was to move the objects to the other set of pedestals. After the subject picked up an object, a pathway appeared and they were required to follow the pathway to the other side of the room. The subject was instructed to perform the task as quickly and accurately as possible, avoiding walls and any other obstacles and placing the object on the center of the pedestal. Postural equilibrium was measured (using the Equitest CDP balance system, Neurocom, International) before, immediately after, and at 1 hr, 2 hr, 4 hr and 6 hr following exposure to VR. Postural equilibrium was measured during quiet stance with eyes open, eyes closed and vision and/or ankle proprioceptive inputs selectively altered by servo-controlling the visual surround and/or support surface to the subject's center of mass sway. Posture data were normalized using a log transformation and motion sickness data were normalized using the square root. In general, we found that exposure to VR resulted in decrements in postural stability. The largest decrements were observed in the tests performed immediately following exposure to VR and showed a fairly rapid recovery across the remaining test sessions. In addition, subjects generally showed improvement across days. We found significant main effects for day and time for the composite equilibrium score and for sensory organization tests (SOT) 1, 2 and 6. Significant main effects were observed for day for SOT 3 and 5. Although we found no significant main effects for gender (when center of gravity was used as a covariate), we did observe significant gender X time interaction effects for composite equilibrium and for SOT 1, 3, 4 and 5.
Women appeared to show larger decrements in postural stability immediately after exposure to VR than men, but recovered more quickly than men. Finally, we found no significant main effects for type of VR device or for exposure duration; however, these factors did interact with other factors during some of the SOTs. Subjects exhibited rapid recovery of motion sickness symptoms across time following exposure to VR and significantly less severe symptoms across days. We did not observe main effects for gender, type of device or duration of exposure. Individuals recovered from the detrimental effects of exposure to virtual reality on postural control and motion sickness within one hour. Sickness severity and initial decrements in postural equilibrium decreased over days, which suggests that subjects become dual-adapted over time. These findings provide some direction for developing training schedules for VR users that facilitate adaptation, and support the idea that preflight training of astronauts may serve as a useful countermeasure for the sensorimotor effects of space flight.
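The normalization step described above (a log transform for posture data, a square root for motion sickness scores) is a standard variance-stabilizing move for right-skewed measures. A minimal sketch, using hypothetical scores; note that `log1p` is our choice to keep zero-valued observations defined, and the study's exact transform may have been a plain log:

```python
import numpy as np

# Hypothetical raw scores, for illustration only.
posture_sway = np.array([1.2, 3.5, 0.8, 7.1])  # e.g. sway amplitudes
sickness_score = np.array([0, 1, 4, 9])        # e.g. symptom severities

# Log transform for the right-skewed posture measure; log1p (log(1 + x))
# stays finite at zero, unlike a plain log.
posture_norm = np.log1p(posture_sway)

# Square-root transform for the motion-sickness scores, as is common
# for count-like data.
sickness_norm = np.sqrt(sickness_score)
```

Both transforms are monotonic, so they compress large values without reordering subjects, which makes downstream ANOVA-style comparisons (day, time, gender) less sensitive to a few extreme scores.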
Casellato, Claudia; Pedrocchi, Alessandra; Zorzi, Giovanna; Rizzi, Giorgio; Ferrigno, Giancarlo; Nardocci, Nardo
2012-07-23
Robot-generated deviating forces during multijoint reaching movements have been applied to investigate motor control and to tune neuromotor adaptation. Can the application of force to limbs improve motor learning? In this framework, the response of children affected by primary dystonia to altered dynamic environments has never been studied. As a preliminary pilot study, eleven children with primary dystonia and eleven age-matched healthy control subjects were asked to perform upper limb movements, triangle-reaching (three directions) and circle-writing, using a haptic robot interacting with task-specific visual interfaces developed ad hoc. Three dynamic conditions were provided: null additive external force (A), constant disturbing force (B), and deactivation of the additive external force again (C). The path length for each trial was computed from the recorded position data and interaction events. The results show that the disturbing force significantly affects movement outcomes in healthy but not in dystonic subjects, who were already compromised in the reference condition: the external alteration uncalibrates the healthy sensorimotor system, while the dystonic one is already strongly uncalibrated. The lack of systematic compensation for perturbation effects during condition B is reflected in the absence of after-effects in condition C, which would be evidence that the CNS generates a prediction of the perturbing forces using an internal model of the environment. The most promising finding is that in the dystonic population the altered dynamic exposure seems to induce a subsequent improvement, i.e. a beneficial after-effect in terms of optimal path control, compared with the corresponding reference movement outcome.
Short-time error-enhancing training in dystonia could therefore represent an effective approach for improving motor performance, since exposure to controlled dynamic alterations induces a refining of the existing but strongly imprecise motor schemes and sensorimotor patterns. PMID:22824547
Sensory convergence in the parieto-insular vestibular cortex
Shinder, Michael E.
2014-01-01
Vestibular signals are pervasive throughout the central nervous system, including the cortex, where they likely play different roles than they do in the better studied brainstem. Little is known about the parieto-insular vestibular cortex (PIVC), an area of the cortex with prominent vestibular inputs. Neural activity was recorded in the PIVC of rhesus macaques during combinations of head, body, and visual target rotations. Activity of many PIVC neurons was correlated with the motion of the head in space (vestibular), the twist of the neck (proprioceptive), and the motion of a visual target, but was not associated with eye movement. PIVC neurons responded most commonly to more than one stimulus, and responses to combined movements could often be approximated by a combination of the individual sensitivities to head, neck, and target motion. The pattern of visual, vestibular, and somatic sensitivities on PIVC neurons displayed a continuous range, with some cells strongly responding to one or two of the stimulus modalities while other cells responded to any type of motion equivalently. The PIVC contains multisensory convergence of self-motion cues with external visual object motion information, such that neurons do not represent a specific transformation of any one sensory input. Instead, the PIVC neuron population may define the movement of head, body, and external visual objects in space and relative to one another. This comparison of self and external movement is consistent with insular cortex functions related to monitoring and explains many disparate findings of previous studies. PMID:24671533
Attributing intentions to random motion engages the posterior superior temporal sulcus.
Lee, Su Mei; Gao, Tao; McCarthy, Gregory
2014-01-01
The right posterior superior temporal sulcus (pSTS) is a neural region involved in assessing the goals and intentions underlying the motion of social agents. Recent research has identified visual cues, such as chasing, that trigger animacy detection and intention attribution. When readily available in a visual display, these cues reliably activate the pSTS. Here, using functional magnetic resonance imaging, we examined if attributing intentions to random motion would likewise engage the pSTS. Participants viewed displays of four moving circles and were instructed to search for chasing or mirror-correlated motion. On chasing trials, one circle chased another circle, invoking the percept of an intentional agent; while on correlated motion trials, one circle's motion was mirror reflected by another. On the remaining trials, all circles moved randomly. As expected, pSTS activation was greater when participants searched for chasing vs correlated motion when these cues were present in the displays. Of critical importance, pSTS activation was also greater when participants searched for chasing compared to mirror-correlated motion when the displays in both search conditions were statistically identical random motion. We conclude that pSTS activity associated with intention attribution can be invoked by top-down processes in the absence of reliable visual cues for intentionality.
Directional asymmetries in human smooth pursuit eye movements.
Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam
2013-06-27
Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. In conclusion, our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.
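Pursuit gain, one of the measures analyzed above, is conventionally the ratio of eye velocity to target velocity during steady-state tracking. A minimal sketch of the computation follows; the trace, sampling rate, and function name are made up for illustration and are not the study's analysis code:

```python
import numpy as np

def pursuit_gain(eye_pos, target_velocity, dt):
    """Steady-state pursuit gain: mean eye velocity / target velocity.

    `eye_pos` is a 1-D array of eye positions (deg) sampled every `dt`
    seconds; `target_velocity` is the constant target speed (deg/s).
    A gain of 1.0 means the eye matches the target exactly; gains
    below 1.0 mean the eye falls behind and catch-up saccades follow.
    """
    eye_velocity = np.gradient(eye_pos, dt)  # numerical differentiation
    return float(np.mean(eye_velocity) / target_velocity)

# A toy trace in which the eye tracks a 10 deg/s target at 90% speed.
dt = 0.001
t = np.arange(0, 1, dt)
eye = 9.0 * t
gain = pursuit_gain(eye, 10.0, dt)  # ~0.9
```

Real analyses would first desaccade the trace (remove catch-up saccades) before averaging, since saccadic velocity spikes would otherwise inflate the gain estimate.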
Takamuku, Shinya; Gomi, Hiroaki
2015-01-01
How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the ‘inverse’ computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. PMID:26156766
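The "inverse" computation the authors describe, estimating force from observed motion, reduces to Newton's second law applied to a visual trajectory. A toy sketch under the assumption of a 1-D point mass (the mass value, sampling, and function name are hypothetical, not the authors' model):

```python
import numpy as np

def implied_force(pos, dt, mass=1.0):
    """Inverse dynamics for a 1-D trajectory: F = m * a.

    Estimates the force that would be needed to produce the observed
    positions `pos` (sampled every `dt` seconds) if the moving object
    were a mechanical load of the given `mass`, roughly as the CNS is
    hypothesized to do for the delayed cursor.
    """
    vel = np.gradient(pos, dt)  # numerical velocity
    acc = np.gradient(vel, dt)  # numerical acceleration
    return mass * acc

# Constant-acceleration trajectory: pos = 0.5 * a * t^2 with a = 2.
dt = 0.01
t = np.arange(0, 1, dt)
pos = 0.5 * 2.0 * t ** 2
force = implied_force(pos, dt, mass=3.0)  # interior values ~6.0
```

The abstract's key observation, that the sensation tracks exposure to forward acceleration of the cursor rather than position error, corresponds to the acceleration term here being the quantity that the implied reaction force is built from.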
NASA Technical Reports Server (NTRS)
Sinacori, J. B.
1980-01-01
A conceptual design of a visual system for a rotorcraft flight simulator is presented, along with drive logic elements for a coupled motion base for such a simulator. The design is the result of an assessment of many potential arrangements of electro-optical elements and is a concept considered feasible for the application. The motion drive elements represent an example logic for a coupled motion base and are essentially an appeal to the designers of such logic to combine their washout and braking functions.
Functional specialization and generalization for grouping of stimuli based on colour and motion
Zeki, Semir; Stutters, Jonathan
2013-01-01
This study was undertaken to learn whether the principle of functional specialization that is evident at the level of the prestriate visual cortex extends to areas that are involved in grouping visual stimuli according to attribute, and specifically according to colour and motion. Subjects viewed, in an fMRI scanner, visual stimuli composed of moving dots, which could be either coloured or achromatic; in some stimuli the moving coloured dots were randomly distributed or moved in random directions; in others, some of the moving dots were grouped together according to colour or to direction of motion, with the number of groupings varying from 1 to 3. Increased activation was observed in area V4 in response to colour grouping and in V5 in response to motion grouping while both groupings led to activity in separate though contiguous compartments within the intraparietal cortex. The activity in all the above areas was parametrically related to the number of groupings, as was the prominent activity in Crus I of the cerebellum where the activity resulting from the two types of grouping overlapped. This suggests (a) that, the specialized visual areas of the prestriate cortex have functions beyond the processing of visual signals according to attribute, namely that of grouping signals according to colour (V4) or motion (V5); (b) that the functional separation evident in visual cortical areas devoted to motion and colour, respectively, is maintained at the level of parietal cortex, at least as far as grouping according to attribute is concerned; and (c) that, by contrast, this grouping-related functional segregation is not maintained at the level of the cerebellum. PMID:23415950
The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1977-01-01
An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.
Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis
2009-12-01
We examined the role of temporal synchrony (the simultaneous appearance of visual features) in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.