Sample records for motion perception

  1. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  2. Disorders of motion and depth.

    PubMed

    Nawrot, Mark

    2003-08-01

    Damage to the human homologue of area MT produces a motion perception deficit similar to that found in the monkey with MT lesions. Even temporary disruption of MT processing with transcranial magnetic stimulation can produce a temporary akinetopsia [127]. Motion perception deficits, however, also are found with a variety of subcortical lesions and other neurologic disorders that can best be described as causing a disconnection within the motion processing stream. The precise role of these subcortical structures, such as the cerebellum, remains to be determined. Simple motion perception, moreover, is only a part of MT function. It undoubtedly has an important role in the perception of depth from motion and stereopsis [112]. Psychophysical studies using aftereffects in normal observers suggest a link between stereo mechanisms and the perception of depth from motion [9-11]. There is even a simple correlation between stereo acuity and the perception of depth from motion [128]. Future studies of patients with cortical lesions will take a closer look at depth perception in association with motion perception and should provide a better understanding of how motion and depth are processed together.

  3. Visual motion integration for perception and pursuit

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Beutter, B. R.; Lorenceau, J.

    2000-01-01

    To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

  4. Impaired Perception of Biological Motion in Parkinson’s Disease

    PubMed Central

    Jaywant, Abhishek; Shiffrar, Maggie; Roy, Serge; Cronin-Golomb, Alice

    2016-01-01

    Objective We examined biological motion perception in Parkinson’s disease (PD). Biological motion perception is related to one’s own motor function and depends on the integrity of brain areas affected in PD, including posterior superior temporal sulcus. If deficits in biological motion perception exist, they may be specific to perceiving natural/fast walking patterns that individuals with PD can no longer perform, and may correlate with disease-related motor dysfunction. Method 26 non-demented individuals with PD and 24 control participants viewed videos of point-light walkers and scrambled versions that served as foils, and indicated whether each video depicted a human walking. Point-light walkers varied by gait type (natural, parkinsonian) and speed (0.5, 1.0, 1.5 m/s). Participants also completed control tasks (object motion, coherent motion perception), a contrast sensitivity assessment, and a walking assessment. Results The PD group demonstrated significantly less sensitivity to biological motion than the control group (p<.001, Cohen’s d=1.22), regardless of stimulus gait type or speed, with a less substantial deficit in object motion perception (p=.02, Cohen’s d=.68). There was no group difference in coherent motion perception. Although individuals with PD had slower walking speed and shorter stride length than control participants, gait parameters did not correlate with biological motion perception. Contrast sensitivity and coherent motion perception also did not correlate with biological motion perception. Conclusion PD leads to a deficit in perceiving biological motion, which is independent of gait dysfunction and low-level vision changes, and may therefore arise from difficulty perceptually integrating form and motion cues in posterior superior temporal sulcus. PMID:26949927
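
    The effect sizes reported above (Cohen's d = 1.22 and 0.68) follow the standard pooled-variance formula. A minimal sketch, with made-up sensitivity scores standing in for the patient and control data:

```python
import math
import statistics

def cohens_d(group1, group2):
    """Cohen's d with a pooled standard deviation: (m1 - m2) / s_pooled."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled

# Hypothetical sensitivity scores (invented for illustration, not the study's data):
controls = [2.0, 2.5, 3.0, 3.5]
patients = [1.0, 1.5, 2.0, 2.5]
print(cohens_d(controls, patients))  # positive d: controls more sensitive than patients
```

A d of about 1.2, as reported for biological motion, means the group means differ by more than one pooled standard deviation.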

  5. Motion coherence affects human perception and pursuit similarly.

    PubMed

    Beutter, B R; Stone, L S

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.
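
    The competing integration rules contrasted in this abstract (global object motion versus the vector average of the aperture-constrained segment motions) can be sketched numerically. The object velocity and segment orientations below are hypothetical, not the paper's actual stimuli:

```python
import math

def normal_component(u, theta_deg):
    """Aperture problem: for a 1-D line segment, only the component of the
    object velocity u perpendicular to the segment is visible."""
    t = (math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg)))
    dot = u[0] * t[0] + u[1] * t[1]
    return (u[0] - dot * t[0], u[1] - dot * t[1])

def vector_average(vs):
    """Vector-average rule: the mean of the component velocity vectors."""
    return (sum(v[0] for v in vs) / len(vs), sum(v[1] for v in vs) / len(vs))

def direction_deg(v):
    return math.degrees(math.atan2(v[1], v[0]))

obj = (0.0, 2.0)                        # object translates straight up (90 deg)
components = [normal_component(obj, 0),  # horizontal segment
              normal_component(obj, 45)] # oblique segment
print(direction_deg(vector_average(components)))  # ~108.4 deg, not 90 deg
```

With asymmetric segment orientations the two predictions diverge, which is what lets stimuli like these distinguish which rule perception and pursuit actually follow.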

  6. Motion coherence affects human perception and pursuit similarly

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Stone, L. S.

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. 
Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.

  7. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
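
    The bias measure used here (the inertial stimulus level at which both response directions are equally likely) is a point of subjective equality. A minimal interpolation sketch with invented response proportions; a real analysis would typically fit a full psychometric function:

```python
def pse(levels, p_rightward):
    """Point of subjective equality: the stimulus level at which 'rightward'
    responses cross 50%, estimated by linear interpolation between samples."""
    for (x0, p0), (x1, p1) in zip(zip(levels, p_rightward),
                                  zip(levels[1:], p_rightward[1:])):
        if p0 <= 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("no 50% crossing in the sampled range")

levels = [-4, -2, 0, 2, 4]                 # hypothetical inertial stimuli, cm/s
p_right = [0.05, 0.20, 0.60, 0.90, 0.99]   # hypothetical response proportions
print(pse(levels, p_right))                # a nonzero value indicates a bias
```

A visual stimulus that shifts this crossing point away from zero has, by this measure, altered self-motion perception.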

  8. Behavioural evidence for distinct mechanisms related to global and biological motion perception.

    PubMed

    Miller, Louisa; Agnew, Hannah C; Pilz, Karin S

    2018-01-01

    The perception of human motion is a vital ability in our daily lives. Human movement recognition is often studied using point-light stimuli in which dots represent the joints of a moving person. Depending on task and stimulus, the local motion of the single dots, and the global form of the stimulus can be used to discriminate point-light stimuli. Previous studies often measured motion coherence for global motion perception and contrasted it with performance in biological motion perception to assess whether difficulties in biological motion processing are related to more general difficulties with motion processing. However, it is so far unknown how performance in global motion tasks relates to the ability to use local motion or global form to discriminate point-light stimuli. Here, we investigated this relationship in more detail. In Experiment 1, we measured participants' ability to discriminate the facing direction of point-light stimuli that contained primarily local motion, global form, or both. In Experiment 2, we embedded point-light stimuli in noise to assess whether previously found relationships in task performance are related to the ability to detect signal in noise. In both experiments, we also assessed motion coherence thresholds from random-dot kinematograms. We found relationships between performances for the different biological motion stimuli, but performance for global and biological motion perception was unrelated. These results are in accordance with previous neuroimaging studies that highlighted distinct areas for global and biological motion perception in the dorsal pathway, and indicate that results regarding the relationship between global motion perception and biological motion perception need to be interpreted with caution. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Motion perception and driving: predicting performance through testing and shortening braking reaction times through training.

    PubMed

    Wilkins, Luke; Gray, Rob; Gaska, James; Winterbottom, Marc

    2013-12-30

    A driving simulator was used to examine the relationship between motion perception and driving performance. Although motion perception test scores have been shown to be related to driving safety, it is not clear which combination of tests best predicts performance and whether motion perception training can improve driving performance. In experiment 1, 60 younger drivers (22.4 ± 2.5 years) completed three motion perception tests (2-dimensional [2D] motion-defined letter [MDL] identification, 3D motion in depth sensitivity [MID], and dynamic visual acuity [DVA]) followed by two driving tests (emergency braking [EB] and hazard perception [HP]). In experiment 2, 20 drivers (21.6 ± 2.1 years) completed 6 weeks of motion perception training (using the MDL, MID, and DVA tests), while 20 control drivers (22.0 ± 2.7 years) completed an online driving safety course. The EB performance was measured before and after training. In experiment 1, MDL (r = 0.34) and MID (r = 0.46) significantly correlated with EB score. The change in DVA score as a function of target speed (i.e., "velocity susceptibility") was correlated most strongly with HP score (r = -0.61). In experiment 2, the motion perception training group had a significant decrease in brake reaction time on the EB test from pre- to posttreatment, while there was no significant change for the control group: t(38) = 2.24, P = 0.03. Tests of 3D motion perception are the best predictors of EB, while DVA velocity susceptibility is the best predictor of hazard perception. Motion perception training appears to result in faster braking responses.
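
    The predictor-outcome relationships above (r = 0.34, 0.46, -0.61) are plain Pearson correlations. A self-contained sketch; the scores below are invented for illustration, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient: covariance normalised by the
    product of the two standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

mid_scores = [1.0, 2.0, 3.0, 4.0]   # hypothetical 3D motion-in-depth sensitivities
eb_scores = [2.1, 2.9, 4.2, 4.8]    # hypothetical emergency-braking scores
print(round(pearson_r(mid_scores, eb_scores), 2))  # close to 1: strong linear relation
```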

  10. Individualistic weight perception from motion on a slope

    PubMed Central

    Zintus-art, K.; Shin, D.; Kambara, H.; Yoshimura, N.; Koike, Y.

    2016-01-01

    Perception of an object’s weight is linked to its form and motion. Studies have shown the relationship between weight perception and motion in horizontal and vertical environments to be universally identical across subjects during passive observation. Here we present a contradictory finding: not all humans share the same motion-weight pairing. A virtual environment where participants control the steepness of a slope was used to investigate the relationship between sliding motion and weight perception. Our findings showed that distinct, albeit subjective, motion-weight relationships in perception could be identified for slope environments. These individualistic perceptions were found when changes in environmental parameters governing motion were introduced, specifically inclination and surface texture. Differences in environmental parameters, combined with individual factors such as experience, affected participants’ weight perception. This phenomenon may offer evidence of the central nervous system’s ability to choose and combine internal models based on information from the sensory system. The results also point toward the possibility of controlling human perception by presenting strong sensory cues to manipulate the mechanisms managing internal models. PMID:27174036

  11. Being Moved by the Self and Others: Influence of Empathy on Self-Motion Perception

    PubMed Central

    Lopez, Christophe; Falconer, Caroline J.; Mast, Fred W.

    2013-01-01

    Background The observation of conspecifics influences our bodily perceptions and actions: Contagious yawning, contagious itching, or empathy for pain, are all examples of mechanisms based on resonance between our own body and others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion. Methodology/Principal Findings We investigated whether viewing our own body, the body of another, and an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms. Conclusions/Significance The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”. PMID:23326302

  12. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  13. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  14. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently and simultaneously perturbed to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
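
    The two computations contrasted in this abstract (contrast for perception, assimilation for pursuit) reduce to a subtraction versus an average. A minimal sketch; the speeds and the 50/50 pursuit weighting are illustrative assumptions, not fitted values from the paper:

```python
def perceived_velocity(target, context):
    """Motion contrast (perception): context motion is subtracted from target motion."""
    return target - context

def pursuit_velocity(target, context, w=0.5):
    """Motion assimilation (pursuit): a weighted average of target and context
    motion; the equal weighting here is an assumption for illustration."""
    return w * target + (1 - w) * context

# Target moves at 10 deg/s while the context is briefly perturbed to 2 deg/s:
print(perceived_velocity(10.0, 2.0))  # 8.0 -> perceived speed pushed away from context
print(pursuit_velocity(10.0, 2.0))    # 6.0 -> pursuit pulled toward the context
```

The point of the sketch is the sign of the context's influence: opposite for perception, same-direction for pursuit.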

  15. A research on motion design for APP's loading pages based on time perception

    NASA Astrophysics Data System (ADS)

    Cao, Huai; Hu, Xiaoyun

    2018-04-01

    Owing to objective constraints such as network bandwidth and hardware performance, waiting remains an unavoidable part of using mobile products. Relevant research shows that users' feelings in a waiting scenario can affect their evaluation of the whole product and the services it provides. With the development of the user experience and interface design disciplines, the role of motion effects in interface design has attracted growing scholarly attention, yet the theory of motion design for waiting scenarios remains incomplete. This article uses the basic theory and experimental research methods of cognitive psychology to explore how motion design affects users' time perception while they wait for APP pages to load. It first analyzes the factors that affect the waiting experience of loading APP pages based on the theory of time perception, and then discusses the impact of motion design on time perception during loading and the corresponding design strategy. Moreover, through an analysis of existing loading motion designs, the article classifies the existing loading motions and designs an experiment to verify the impact of different motion types on users' time perception. The results show that perceived waiting time in mobile APPs is related to the loading motion type, and that the combined type of loading motion can effectively shorten perceived waiting time, as it scored a higher mean value on the time-perception measure.

  16. Neural Correlates of Coherent and Biological Motion Perception in Autism

    ERIC Educational Resources Information Center

    Koldewyn, Kami; Whitney, David; Rivera, Susan M.

    2011-01-01

    Recent evidence suggests those with autism may be generally impaired in visual motion perception. To examine this, we investigated both coherent and biological motion processing in adolescents with autism employing both psychophysical and fMRI methods. Those with autism performed as well as matched controls during coherent motion perception but…

  17. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  18. Ventral aspect of the visual form pathway is not critical for the perception of biological motion

    PubMed Central

    Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene

    2015-01-01

    Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504

  19. Neck Proprioception Shapes Body Orientation and Perception of Motion

    PubMed Central

    Pettorossi, Vito Enrico; Schieppati, Marco

    2014-01-01

    This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead (SSA), and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall the early findings on human balance, gait trajectory, and SSA induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus for emphasizing the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, SSA, and walking trajectory. Neck vibration also induces persistent aftereffects on the SSA and on self-motion perception of vestibular origin. These perceptive effects depend on intensity, duration, side of the conditioning vibratory stimulation, and on muscle status. These effects can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of perception of body orientation and motion. Furthermore, it indicates that tonic neck-proprioceptive input may induce persistent influences on the subject’s mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent head positional or motor changes. PMID:25414660

  20. Neck proprioception shapes body orientation and perception of motion.

    PubMed

    Pettorossi, Vito Enrico; Schieppati, Marco

    2014-01-01

    This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead (SSA), and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall the early findings on human balance, gait trajectory, and SSA induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus for emphasizing the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, SSA, and walking trajectory. Neck vibration also induces persistent aftereffects on the SSA and on self-motion perception of vestibular origin. These perceptive effects depend on intensity, duration, side of the conditioning vibratory stimulation, and on muscle status. These effects can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of perception of body orientation and motion. Furthermore, it indicates that tonic neck-proprioceptive input may induce persistent influences on the subject's mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent head positional or motor changes.

  1. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of neurons in V1 for motion perception in the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper that comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference-of-Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and yields significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
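The gradient-similarity component of the spatial index can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the finite-difference gradients and the stabilizing constant `c` are assumptions chosen for clarity.

```python
import numpy as np

def gradient_magnitude(frame):
    # Simple finite-difference gradients; Sobel-style operators are also common.
    gy, gx = np.gradient(frame.astype(float))
    return np.sqrt(gx**2 + gy**2)

def gradient_similarity(ref, dist, c=0.01):
    # Pointwise similarity of gradient magnitudes, bounded above by 1 for c > 0;
    # averaging the map yields one spatial-quality score per frame.
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    sim_map = (2 * g1 * g2 + c) / (g1**2 + g2**2 + c)
    return sim_map.mean()
```

An undistorted frame scores 1 by construction; any gradient mismatch pulls the score below 1.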

  2. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays some role. PMID:25873869

  3. Gravity matters: Motion perceptions modified by direction and body position.

    PubMed

    Claassen, Jens; Bardins, Stanislavs; Spiegel, Rainer; Strupp, Michael; Kalla, Roger

    2016-07-01

    Motion coherence thresholds are consistently higher at lower velocities. In this study we analysed the influence of the position and direction of moving objects on their perception, and thereby the influence of gravity. This paradigm allows differentiation between coherent and randomly moving objects in an upright and a reclining position, with a horizontal or vertical axis of motion. Eighteen young healthy participants were examined in this coherence-threshold paradigm. Motion coherence thresholds were significantly lower when position and motion were congruent with gravity, independent of motion velocity (p=0.024). In the other conditions, higher motion coherence thresholds (MCT) were found at lower velocities and vice versa (p<0.001). This result confirms previous studies reporting higher MCT at lower velocity, but contrasts with studies concerning perception of virtual turns and optokinetic nystagmus, in which differences in perception were due to different directions irrespective of body position, i.e. perception took place in an egocentric reference frame. Since the observed differences occurred in an upright position only, perception of coherent motion in this study is defined by an earth-centered reference frame rather than by an egocentric frame. Copyright © 2016 Elsevier Inc. All rights reserved.
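The coherence paradigm used here rests on random dot kinematograms in which only a fraction of dots carry the signal direction. A minimal sketch of one frame update follows; the step size, dot count, and omission of wrap-around handling are simplifying assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

def rdk_step(positions, coherence, direction_deg, step=1.0, rng=None):
    # Move a fraction `coherence` of the dots in the signal direction;
    # the remaining dots take random-direction steps of the same length.
    rng = np.random.default_rng() if rng is None else rng
    n = len(positions)
    n_signal = int(round(coherence * n))
    angles = np.full(n, np.deg2rad(direction_deg))
    angles[n_signal:] = rng.uniform(0, 2 * np.pi, n - n_signal)
    return positions + step * np.column_stack([np.cos(angles), np.sin(angles)])
```

At coherence 1.0 every dot shares the signal direction; at 0.0 all motion is noise, with every dot still moving at the same speed.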

  4. Neural correlates of coherent and biological motion perception in autism.

    PubMed

    Koldewyn, Kami; Whitney, David; Rivera, Susan M

    2011-09-01

    Recent evidence suggests those with autism may be generally impaired in visual motion perception. To examine this, we investigated both coherent and biological motion processing in adolescents with autism employing both psychophysical and fMRI methods. Those with autism performed as well as matched controls during coherent motion perception but had significantly higher thresholds for biological motion perception. The autism group showed reduced posterior Superior Temporal Sulcus (pSTS), parietal and frontal activity during a biological motion task while showing similar levels of activity in MT+/V5 during both coherent and biological motion trials. Activity in MT+/V5 was predictive of individual coherent motion thresholds in both groups. Activity in dorsolateral prefrontal cortex (DLPFC) and pSTS was predictive of biological motion thresholds in control participants but not in those with autism. Notably, however, activity in DLPFC was negatively related to autism symptom severity. These results suggest that impairments in higher-order social or attentional networks may underlie visual motion deficits observed in autism. © 2011 Blackwell Publishing Ltd.

  6. Dynamical evolution of motion perception.

    PubMed

    Kanai, Ryota; Sheth, Bhavin R; Shimojo, Shinsuke

    2007-03-01

    Motion is defined as a sequence of positional changes over time. In perception, however, spatial position and motion dynamically interact with each other. This reciprocal interaction suggests that the percept of a moving object may itself dynamically evolve following the onset of motion. Here, we show evidence that the percept of a moving object systematically changes over time. In experiments, we introduced a transient gap in the motion sequence or a brief change in some feature (e.g., color or shape) of an otherwise smoothly moving target stimulus. Observers were highly sensitive to the gap or transient change if it occurred soon after motion onset (≤200 ms), but significantly less so if it occurred later (≥300 ms). Our findings suggest that the moving stimulus is initially perceived as a time series of discrete, potentially isolatable frames; the later failures to perceive change suggest that over time the stimulus begins to be perceived as a single, indivisible gestalt integrated over space as well as time, which could well be the signature of an emergent stable motion percept.

  7. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development

    PubMed Central

    Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin

    2017-01-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122

  8. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  9. Self-motion perception: assessment by real-time computer-generated animations

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Phillips, J. O.

    2001-01-01

    We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials, whereas slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animation procedure permits the use of naive, untrained subjects who lack a vocabulary for reporting motion perception, and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders, and so on.

  10. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

  11. Global Motion Perception in 2-Year-Old Children: A Method for Psychophysical Assessment and Relationships With Clinical Measures of Visual Function

    PubMed Central

    Yu, Tzu-Ying; Jacobs, Robert J.; Anstice, Nicola S.; Paudel, Nabin; Harding, Jane E.; Thompson, Benjamin

    2013-01-01

    Purpose. We developed and validated a technique for measuring global motion perception in 2-year-old children, and assessed the relationship between global motion perception and other measures of visual function. Methods. Random dot kinematogram (RDK) stimuli were used to measure motion coherence thresholds in 366 children at risk of neurodevelopmental problems at 24 ± 1 months of age. RDKs of variable coherence were presented and eye movements were analyzed offline to grade the direction of the optokinetic reflex (OKR) for each trial. Motion coherence thresholds were calculated by fitting psychometric functions to the resulting datasets. Test–retest reliability was assessed in 15 children, and motion coherence thresholds were measured in a group of 10 adults using OKR and behavioral responses. Standard age-appropriate optometric tests also were performed. Results. Motion coherence thresholds were measured successfully in 336 (91.8%) children using the OKR technique, but only 31 (8.5%) using behavioral responses. The mean threshold was 41.7 ± 13.5% for 2-year-old children and 3.3 ± 1.2% for adults. Within-assessor reliability and test–retest reliability were high in children. Children's motion coherence thresholds were significantly correlated with stereoacuity (LANG I & II test, ρ = 0.29, P < 0.001; Frisby, ρ = 0.17, P = 0.022), but not with binocular visual acuity (ρ = 0.11, P = 0.07). In adults OKR and behavioral motion coherence thresholds were highly correlated (intraclass correlation = 0.81, P = 0.001). Conclusions. Global motion perception can be measured in 2-year-old children using the OKR. This technique is reliable and data from adults suggest that motion coherence thresholds based on the OKR are related to motion perception. Global motion perception was related to stereoacuity in children. PMID:24282224
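The threshold calculation described in the Methods, fitting a psychometric function to per-coherence performance, can be sketched as below. The Weibull form, fixed slope, 0.5 guess rate, and grid-search fit are illustrative simplifications; the authors' actual fitting procedure is not specified here.

```python
import numpy as np

def weibull(coherence, threshold, slope=3.0, guess=0.5):
    # Proportion correct for a two-alternative-style task: rises from
    # `guess` toward 1 as coherence passes `threshold`.
    return guess + (1 - guess) * (1 - np.exp(-(coherence / threshold) ** slope))

def fit_threshold(coherences, p_correct, grid=np.linspace(0.01, 1.0, 200)):
    # Least-squares grid search over candidate thresholds (a sketch;
    # maximum-likelihood fitting is standard in practice).
    errors = [np.sum((weibull(coherences, t) - p_correct) ** 2) for t in grid]
    return grid[int(np.argmin(errors))]
```

With noise-free data generated from the same model, the fitted threshold recovers the generating value to within the grid spacing.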

  12. Modulation frequency as a cue for auditory speed perception.

    PubMed

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
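The stimulus principle, a rattling sound whose amplitude-modulation frequency scales with simulated speed, can be sketched as follows. The sampling rate and the speed-to-AM-frequency mapping are arbitrary illustrative values, not the study's parameters.

```python
import numpy as np

def rattling_sound(speed, duration=1.0, fs=8000, am_per_speed=10.0, rng=None):
    # Carrier: white noise. Envelope: sinusoidal amplitude modulation whose
    # frequency grows linearly with simulated speed (`am_per_speed` Hz per
    # unit speed is an assumed mapping chosen for illustration).
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.arange(int(duration * fs)) / fs
    am_freq = am_per_speed * speed
    envelope = 0.5 * (1 + np.sin(2 * np.pi * am_freq * t))
    return envelope * rng.standard_normal(len(t))
```

The envelope periodically gates the noise to (near) silence, which is what gives rattles their speed-correlated temporal texture.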

  13. Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator

    NASA Astrophysics Data System (ADS)

    Rehmatullah, Faizan

    In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of the simulator workspace for every maneuver while reproducing the perceived motion of the vehicle. With its ability to handle nonlinear constraints, model predictive control allows us to incorporate workspace limitations.
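The gravity ambiguity this class of algorithm exploits can be illustrated with classic tilt coordination, in which sustained acceleration is faked by tilting the platform so that gravity supplies the specific force. This is a sketch of the general principle only, not the MPC formulation developed in the thesis; the tilt limit is an assumed value.

```python
import math

def tilt_coordination(target_accel, g=9.81, max_tilt_rad=0.3):
    # Sustained longitudinal acceleration a can be simulated by a platform
    # tilt theta with a ≈ g * sin(theta), clamped to the workspace limit.
    theta = math.asin(max(-1.0, min(1.0, target_accel / g)))
    return max(-max_tilt_rad, min(max_tilt_rad, theta))
```

Within the tilt limit the reproduced specific force matches the target exactly; beyond it the command saturates, which is precisely the kind of workspace constraint an MPC formulation can handle explicitly.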

  14. The upper spatial limit for perception of displacement is affected by preceding motion.

    PubMed

    Stefanova, Miroslava; Mateeff, Stefan; Hohnsbein, Joachim

    2009-03-01

    The upper spatial limit D(max) for perception of apparent motion of a random dot pattern may be strongly affected by another, collinear, motion that precedes it [Mateeff, S., Stefanova, M., & Hohnsbein, J. (2007). Perceived global direction of a compound of real and apparent motion. Vision Research, 47, 1455-1463]. In the present study this phenomenon was studied with two-dimensional motion stimuli. A random dot pattern moved alternately in a vertical and an oblique direction (zig-zag motion). The vertical motion was 1.04 degrees in length; it was produced by three discrete spatial steps of the dots. Thereafter the dots were displaced by a single spatial step in the oblique direction. Each motion lasted for 57 ms. The upper spatial limit for perception of the oblique motion was measured under two conditions: the vertical component of the oblique motion and the vertical motion were either in the same or in opposite directions. Perception of the oblique motion was strongly influenced by the relative direction of the vertical motion that preceded it; in the "same" condition the upper spatial limit was much smaller than in the "opposite" condition. Decreasing the speed of the vertical motion reversed this effect. Interpretations based on networks of motion detectors and on Gestalt theory are discussed.

  15. IQ Predicts Biological Motion Perception in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Rutherford, M. D.; Troje, Nikolaus F.

    2012-01-01

    Biological motion is easily perceived by neurotypical observers when encoded in point-light displays. Some but not all relevant research shows significant deficits in biological motion perception among those with ASD, especially with respect to emotional displays. We tested adults with and without ASD on the perception of masked biological motion…

  16. Contrasting accounts of direction and shape perception in short-range motion: Counterchange compared with motion energy detection.

    PubMed

    Norman, Joseph; Hock, Howard; Schöner, Gregor

    2014-07-01

    It has long been thought (e.g., Cavanagh & Mather, 1989) that first-order motion-energy extraction via space-time comparator-type models (e.g., the elaborated Reichardt detector) is sufficient to account for human performance in the short-range motion paradigm (Braddick, 1974), including the perception of reverse-phi motion when the luminance polarity of the visual elements is inverted during successive frames. Human observers' ability to discriminate motion direction and use coherent motion information to segregate a region of a random cinematogram and determine its shape was tested; they performed better in the same-, as compared with the inverted-, polarity condition. Computational analyses of short-range motion perception based on the elaborated Reichardt motion energy detector (van Santen & Sperling, 1985) predict, incorrectly, that symmetrical results will be obtained for the same- and inverted-polarity conditions. In contrast, the counterchange detector (Hock, Schöner, & Gilroy, 2009) predicts an asymmetry quite similar to that of human observers in both motion direction and shape discrimination. The further advantage of counterchange, as compared with motion energy, detection for the perception of spatial shape- and depth-from-motion is discussed.
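A minimal correlation-type detector illustrates the reverse-phi signature discussed above: inverting luminance polarity between frames flips the sign of the correlation, so an energy-style detector signals motion in the opposite direction. This toy 1-D version omits the spatiotemporal filtering of the elaborated Reichardt model and is only a sketch of the mechanism.

```python
import numpy as np

def reichardt_response(frame1, frame2):
    # Correlate each point at time 1 with its right neighbour at time 2
    # (rightward evidence), minus the mirror-image leftward correlation.
    right = np.sum(frame1[:-1] * frame2[1:])
    left = np.sum(frame1[1:] * frame2[:-1])
    return right - left
```

A rightward shift gives a positive response; the same shift with inverted polarity gives a negative one, i.e. perceived leftward (reverse-phi) motion.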

  17. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory apparent motion of static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Translational Vestibulo-Ocular Reflex and Motion Perception During Interaural Linear Acceleration: Comparison of Different Motion Paradigms

    NASA Technical Reports Server (NTRS)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.

    2011-01-01

    The neural mechanisms that resolve ambiguous tilt-translation motion have been hypothesized to differ for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or the lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s² (equivalent to 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (the ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during VRC and lateral translation, which involve actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which do not. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and they became more in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each.
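Gain and phase of a response relative to a sinusoidal stimulus, as reported above, can be computed by projecting both signals onto the stimulus frequency. This sketch assumes a whole number of cycles and noise-free signals; real analyses would also need detrending and cycle averaging.

```python
import numpy as np

def gain_and_phase(stimulus, response, freq, fs):
    # Project both signals onto a complex sinusoid at the stimulus frequency;
    # gain is the amplitude ratio and phase the response lead (+) or lag (-)
    # in degrees.
    t = np.arange(len(stimulus)) / fs
    basis = np.exp(-2j * np.pi * freq * t)
    s = np.dot(stimulus, basis)
    r = np.dot(response, basis)
    return abs(r) / abs(s), np.degrees(np.angle(r / s))
```

A response at half the stimulus amplitude, leading by 30°, yields gain 0.5 and phase +30°.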

  19. Global motion perception is associated with motor function in 2-year-old children.

    PubMed

    Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E

    2017-09-29

    The dorsal visual processing stream, which includes V1, motion sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly, but statistically significantly, associated with Bayley III composite motor (r² = 0.06, p < 0.001, n = 375) and gross motor scores (r² = 0.06, p < 0.001, n = 375). The associations remained significant when language score was included in the regression model. In addition, when language score was included in the model, stereopsis was significantly associated with composite motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. The colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.

  1. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  2. Color and luminance in the perception of 1- and 2-dimensional motion.

    PubMed

    Farell, B

    1999-08-01

    An isoluminant color grating usually appears to move more slowly than a luminance grating that has the same physical speed. Yet a grating defined by both color and luminance is seen as perceptually unified and moving at a single intermediate speed. In experiments measuring perceived speed and direction, it was found that color- and luminance-based motion signals are combined differently in the perception of 1-D motion than they are in the perception of 2-D motion. Adding color to a moving 1-D luminance pattern, a grating, slows its perceived speed. Adding color to a moving 2-D luminance pattern, a plaid made of orthogonal gratings, leaves its perceived speed unchanged. Analogous results occur for the perception of the direction of 2-D motion. The visual system appears to discount color when analyzing the motion of luminance-bearing 2-D patterns. This strategy has adaptive advantages, making the sensing of object motion more veridical without sacrificing the ability to see motion at isoluminance.
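The 2-D motion of a plaid made of orthogonal gratings is commonly modeled by the intersection-of-constraints rule: the plaid velocity is the unique vector consistent with each component grating's speed along its own normal. A minimal sketch (unit normals assumed; this is the standard geometric construction, not a claim about this study's analysis):

```python
import numpy as np

def plaid_velocity(n1, s1, n2, s2):
    # Intersection of constraints: solve n_i . v = s_i for the 2-D
    # velocity v, given each grating's unit normal n_i and speed s_i.
    A = np.array([n1, n2], dtype=float)
    return np.linalg.solve(A, np.array([s1, s2], dtype=float))
```

For orthogonal gratings each drifting at unit speed along its normal, the construction yields the diagonal plaid velocity (1, 1).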

  3. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development.

    PubMed

    Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin

    2017-06-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children.

  4. Deficient motion-defined and texture-defined figure-ground segregation in amblyopic children.

    PubMed

    Wang, Jane; Ho, Cindy S; Giaschi, Deborah E

    2007-01-01

    Motion-defined form deficits in the fellow eye and the amblyopic eye of children with amblyopia implicate possible direction-selective motion processing or static figure-ground segregation deficits. Deficient motion-defined form perception in the fellow eye of amblyopic children may not be fully accounted for by a general motion processing deficit. This study investigates the contribution of figure-ground segregation deficits to the motion-defined form perception deficits in amblyopia. Performances of 6 amblyopic children (5 anisometropic, 1 anisostrabismic) and 32 control children with normal vision were assessed on motion-defined form, texture-defined form, and global motion tasks. Performance on motion-defined and texture-defined form tasks was significantly worse in amblyopic children than in control children. Performance on global motion tasks was not significantly different between the 2 groups. Faulty figure-ground segregation mechanisms are likely responsible for the observed motion-defined form perception deficits in amblyopia.

  5. Neural network architecture for form and motion perception (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen

    1991-08-01

    Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90° whereas opposite directions differ by 180°, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.

  6. Prolonged asymmetric vestibular stimulation induces opposite, long-term effects on self-motion perception and ocular responses.

    PubMed

    Pettorossi, V E; Panichi, R; Botti, F M; Kyriakareli, A; Ferraresi, A; Faralli, M; Schieppati, M; Bronstein, A M

    2013-04-01

    Self-motion perception and the vestibulo-ocular reflex (VOR) were investigated in healthy subjects during asymmetric whole body yaw plane oscillations while standing on a platform in the dark. Platform oscillation consisted of two half-sinusoidal cycles of the same amplitude (40°) but different duration, featuring a fast (FHC) and a slow half-cycle (SHC). Rotation consisted of four or 20 consecutive cycles to probe adaptation further with the longer duration protocol. Self-motion perception was estimated by subjects tracking with a pointer the remembered position of an earth-fixed visual target. VOR was measured by electro-oculography. The asymmetric stimulation pattern consistently induced a progressive increase of asymmetry in motion perception, whereby the gain of the tracking response gradually increased during FHCs and decreased during SHCs. The effect was observed already during the first few cycles and further increased during 20 cycles, leading to a totally distorted location of the initial straight-ahead. In contrast, after some initial interindividual variability, the gain of the slow phase VOR became symmetric, decreasing for FHCs and increasing for SHCs. These oppositely directed adaptive effects in motion perception and VOR persisted for nearly an hour. Control conditions using prolonged but symmetrical stimuli produced no adaptive effects on either motion perception or VOR. These findings show that prolonged asymmetric activation of the vestibular system leads to opposite patterns of adaptation of self-motion perception and VOR. The results provide strong evidence that semicircular canal inputs are processed centrally by independent mechanisms for perception of body motion and eye movement control. These divergent adaptation mechanisms enhance awareness of movement toward the faster body rotation, while improving the eye stabilizing properties of the VOR.

  8. Criterion-free measurement of motion transparency perception at different speeds

    PubMed Central

    Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.

    2018-01-01

    Transparency perception often occurs when objects within the visual scene partially occlude each other or move simultaneously, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to the perception of transparency. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an "odd-one-out," three-alternative forced-choice procedure. Two intervals contained the standard—a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison—speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154
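    The comparison stimulus described above, a uniform speed distribution with a notch of a given width removed, can be sketched by rejection sampling. A minimal sketch follows; the function and parameter names are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def sample_notched_uniform(low, high, notch_center, notch_width, size, rng):
        """Sample from a uniform [low, high] distribution with a central band
        (the 'notch') excluded, by resampling any values that land inside it."""
        lo_n = notch_center - notch_width / 2.0
        hi_n = notch_center + notch_width / 2.0
        out = rng.uniform(low, high, size)
        inside = (out > lo_n) & (out < hi_n)
        while inside.any():
            out[inside] = rng.uniform(low, high, inside.sum())
            inside = (out > lo_n) & (out < hi_n)
        return out

    rng = np.random.default_rng(0)
    # Dot speeds for a hypothetical comparison interval: same range as the
    # standard (1-9 deg/s here), but with a 2 deg/s-wide notch around 5 deg/s.
    speeds = sample_notched_uniform(1.0, 9.0, 5.0, 2.0, 500, rng)
    ```

    The standard interval would simply draw from the same uniform range without the notch; widening the notch pushes the two speed sub-populations apart, which is one way the separation between overlapping distributions can be manipulated.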

  9. The Coordination Dynamics of Observational Learning: Relative Motion Direction and Relative Phase as Informational Content Linking Action-Perception to Action-Production.

    PubMed

    Buchanan, John J

    2016-01-01

    The primary goal of this chapter is to merge together the visual perception perspective of observational learning and the coordination dynamics theory of pattern formation in perception and action. Emphasis is placed on identifying movement features that constrain and inform action-perception and action-production processes. Two sources of visual information are examined, relative motion direction and relative phase. The visual perception perspective states that the topological features of relative motion between limbs and joints remain invariant across an actor's motion and therefore are available for pickup by an observer. Relative phase has been put forth as an informational variable that links perception to action within the coordination dynamics theory. A primary assumption of the coordination dynamics approach is that environmental information is meaningful only in terms of the behavior it modifies. Across a series of single-limb and bimanual tasks, it is shown that the relative motion and relative phase between limbs and joints are picked up through visual processes and support observational learning of motor skills. Moreover, internal estimations of motor skill proficiency and competency are linked to the informational content found in relative motion and relative phase. Thus, the chapter links action to perception and vice versa, and also links cognitive evaluations to the coordination dynamics that support action-perception and action-production processes.

  10. A slowly moving foreground can capture an observer's self-motion--a report of a new motion illusion: inverted vection.

    PubMed

    Nakamura, S; Shimojo, S

    2000-01-01

    We investigated interactions between foreground and background stimuli during visually induced perception of self-motion (vection) by using a stimulus composed of orthogonally moving random-dot patterns. The results indicated that, when the foreground moves with a slower speed, a self-motion sensation with a component in the same direction as the foreground is induced. We named this novel component of self-motion perception 'inverted vection'. The robustness of inverted vection was confirmed using various measures of self-motion sensation and under different stimulus conditions. The mechanism underlying inverted vection is discussed with regard to potentially relevant factors, such as relative motion between the foreground and background, and the interaction between the mis-registration of eye-movement information and self-motion perception.

  11. Perception of Visual Speed While Moving

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Gigone, Krista; Scott, Rebecca

    2005-01-01

    During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…

  12. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238

  14. Differential responses in dorsal visual cortex to motion and disparity depth cues

    PubMed Central

    Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.

    2013-01-01

    We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion determined by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information, which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808

  15. Adaptation aftereffects in the perception of gender from biological motion.

    PubMed

    Troje, Nikolaus F; Sadr, Javid; Geyer, Henning; Nakayama, Ken

    2006-07-28

    Human visual perception is highly adaptive. While this has been known and studied for a long time in domains such as color vision, motion perception, or the processing of spatial frequency, a number of more recent studies have shown that adaptation and adaptation aftereffects also occur in high-level visual domains like shape perception and face recognition. Here, we present data that demonstrate a pronounced aftereffect in response to adaptation to the perceived gender of biological motion point-light walkers. A walker that is perceived to be ambiguous in gender under neutral adaptation appears to be male after adaptation with an exaggerated female walker and female after adaptation with an exaggerated male walker. We discuss this adaptation aftereffect as a tool to characterize and probe the mechanisms underlying biological motion perception.

  16. Coherence Motion Perception in Developmental Dyslexia: A Meta-Analysis of Behavioral Studies

    ERIC Educational Resources Information Center

    Benassi, Mariagrazia; Simonelli, Letizia; Giovagnoli, Sara; Bolzani, Roberto

    2010-01-01

    The magnitude of the association between developmental dyslexia (DD) and motion sensitivity is evaluated in 35 studies, which investigated coherence motion perception in DD. A first analysis is conducted on the differences between DD groups and age-matched control (C) groups. In a second analysis, the relationship between motion coherence…

  17. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  18. Visual processing and social cognition in schizophrenia: relationships among eye movements, biological motion perception, and empathy.

    PubMed

    Matsumoto, Yukiko; Takahashi, Hideyuki; Murai, Toshiya; Takahashi, Hidehiko

    2015-01-01

    Schizophrenia patients have impairments at several levels of cognition including visual attention (eye movements), perception, and social cognition. However, it remains unclear how lower-level cognitive deficits influence higher-level cognition. To elucidate the hierarchical path linking deficient cognitions, we focused on biological motion perception, which is involved in both the early stage of visual perception (attention) and higher social cognition, and is impaired in schizophrenia. Seventeen schizophrenia patients and 18 healthy controls participated in the study. Using point-light walker stimuli, we examined eye movements during biological motion perception in schizophrenia. We assessed relationships among eye movements, biological motion perception and empathy. In the biological motion detection task, schizophrenia patients showed lower accuracy and fixated longer than healthy controls. In contrast to controls, patients exhibiting longer fixation durations and fewer fixations demonstrated higher accuracy. Additionally, in the patient group, the correlations between accuracy and affective empathy index and between eye movement index and affective empathy index were significant. The altered gaze patterns in patients indicate that top-down attention compensates for impaired bottom-up attention. Furthermore, aberrant eye movements might lead to deficits in biological motion perception and finally link to social cognitive impairments. The current findings merit further investigation toward understanding the mechanisms of social cognitive training and its development.

  19. Neural representations of kinematic laws of motion: evidence for action-perception coupling.

    PubMed

    Dayan, Eran; Casile, Antonino; Levit-Binnun, Nava; Giese, Martin A; Hendler, Talma; Flash, Tamar

    2007-12-18

    Behavioral and modeling studies have established that curved human hand movements, such as drawing movements, obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.
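    The 2/3 power law referenced above states that angular velocity A(t) scales with curvature C(t) as A = K · C^(2/3); equivalently, tangential speed falls as curvature^(-1/3), so the hand slows in high-curvature segments of a trajectory. A minimal sketch of this relationship; the ellipse example and the gain parameter are illustrative, not from the paper:

    ```python
    import numpy as np

    def two_thirds_power_law_speed(curvature, gain=1.0):
        """Tangential speed predicted by the 2/3 power law:
        v = gain * curvature**(-1/3).  Since angular velocity A = v * curvature,
        this is equivalent to A = gain * curvature**(2/3)."""
        curvature = np.asarray(curvature, dtype=float)
        return gain * curvature ** (-1.0 / 3.0)

    # Example: an ellipse. Curvature peaks at the ends of the major axis,
    # so the law predicts the slowest movement there.
    t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    a, b = 2.0, 1.0                               # semi-axes
    x, y = a * np.cos(t), b * np.sin(t)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    v = two_thirds_power_law_speed(curvature)
    ```

    Because v is a strictly decreasing function of curvature, the predicted speed minimum always coincides with the curvature maximum, which is the coupling the fMRI study probed.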

  20. Human Motion Perception and Smooth Eye Movements Show Similar Directional Biases for Elongated Apertures

    NASA Technical Reports Server (NTRS)

    Beutter, Brent R.; Stone, Leland S.

    1997-01-01

    Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.

  2. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion

    PubMed Central

    Niehorster, Diederick C.

    2017-01-01

    How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
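    The flow parsing gain described above can be sketched as the fraction of the self-motion-induced retinal component that the visual system subtracts before judging object motion. The functions below are a hedged illustration of that arithmetic, not the authors' analysis code; all names are hypothetical:

    ```python
    def perceived_object_motion(retinal_motion, flow_component, gain):
        """Scene-relative object motion recovered by subtracting a fraction
        (the flow parsing gain) of the retinal motion attributable to
        self-motion.  A gain of 1.0 would mean complete, accurate parsing."""
        return retinal_motion - gain * flow_component

    def flow_parsing_gain(nulling_speed, self_motion_component):
        """Gain as estimated in a retinal-motion-nulling paradigm (sketch):
        the probe speed needed to null perceived motion, divided by the
        retinal component that self-motion adds.  Values below 1.0 indicate
        incomplete subtraction, as the study reports."""
        return nulling_speed / self_motion_component
    ```

    For example, if self-motion adds a 2 deg/s retinal component but nulling is achieved at 1.5 deg/s, the estimated gain is 0.75, and a physically stationary object would retain a residual perceived motion of 0.5 deg/s.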

  3. Visual-vestibular integration as a function of adaptation to space flight and return to Earth

    NASA Technical Reports Server (NTRS)

    Reschke, Millard R.; Bloomberg, Jacob J.; Harm, Deborah L.; Huebner, William P.; Krnavek, Jody M.; Paloski, William H.; Berthoz, Alan

    1999-01-01

    Research on perception and control of self-orientation and self-motion addresses interactions between action and perception. Self-orientation and self-motion, and the perception of that orientation and motion, are required for and modified by goal-directed action. Detailed Supplementary Objective (DSO) 604 Operational Investigation-3 (OI-3) was designed to investigate the integrated coordination of head and eye movements within a structured environment where perception could modify responses and where responses could be compensatory for perception. A full understanding of this coordination required definition of spatial orientation models for the microgravity environment encountered during spaceflight.

  4. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. The aims were to test the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in neurologic mechanisms necessary for integrating all early visual input.

  5. Motion perception without Nystagmus--a novel manifestation of cerebellar stroke.

    PubMed

    Shaikh, Aasef G

    2014-01-01

    Motion perception and the vestibulo-ocular reflex (VOR) serve distinct functions. The VOR keeps gaze steady on the target of interest, whereas vestibular perception serves a number of tasks, including awareness of self-motion and orientation in space. The VOR and motion perception may abide by the same neurophysiological principles, but distinct anatomical correlates have been proposed for each. In patients with cerebellar stroke in the distribution of the medial division of the posterior inferior cerebellar artery, we asked whether a specific location of the focal lesion in the vestibulocerebellum could cause impaired perception of motion but normal eye movements. Thirteen patients were studied; 5 consistently perceived spinning of the surrounding environment (vertigo), but their eye movements were normal. This group was called the "disease model." The remaining 8 patients were also symptomatic for vertigo, but they had spontaneous nystagmus. The latter group was called the "disease control." Magnetic resonance imaging in both groups consistently revealed focal cerebellar infarcts affecting the posterior cerebellar vermis (lobule IX). In the "disease model" group, only part of lobule IX was affected; in the "disease control" group, however, the complete lobule IX was involved. This study identified a novel presentation of cerebellar stroke in which only motion perception was affected, in the absence of objective neurologic signs. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  6. Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery

    PubMed Central

    Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack

    2015-01-01

    Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
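    The three motion metrics named above can be sketched from sampled 3-D instrument positions. This is a minimal illustration under stated assumptions, not the authors' implementation: the choice of the depth axis and the jerk-based smoothness definition are assumptions.

```python
import numpy as np

def path_length(positions):
    """Total distance travelled: sum of Euclidean steps between samples."""
    steps = np.diff(positions, axis=0)
    return float(np.sum(np.linalg.norm(steps, axis=1)))

def depth_excursion(positions, depth_axis=2):
    """Illustrative depth-perception metric: total travel along the
    (assumed) depth axis of the position data."""
    return float(np.sum(np.abs(np.diff(positions[:, depth_axis]))))

def smoothness_jerk(positions, dt):
    """Illustrative smoothness metric: mean squared jerk (third derivative
    of position); lower values indicate smoother motion."""
    jerk = np.diff(positions, n=3, axis=0) / dt**3
    return float(np.mean(np.sum(jerk**2, axis=1)))
```

    A trajectory sampled at constant velocity gives zero jerk, so `smoothness_jerk` returns 0 for it, while `path_length` still reports the distance covered.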

  7. Default perception of high-speed motion

    PubMed Central

    Wexler, Mark; Glennerster, Andrew; Cavanagh, Patrick; Ito, Hiroyuki; Seno, Takeharu

    2013-01-01

    When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This visually striking effect, which we call “high phi,” challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit called dmax. Our experiments with transients, such as texture randomization or contrast reversal, show that the magnitude of the jump depends on spatial frequency and transient duration—but not on the speed of the inducing motion signals—and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond the upper step size limit of dmax, a breakdown of coherent motion perception is expected; however, in the presence of an inducer, observers again perceive coherent displacements at or just above dmax. In summary, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion—as suggested by the minimal-motion principle—observers perceive jumps whose amplitude closely follows their own dmax limits. PMID:23572578

  8. A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.

    PubMed

    Ratzlaff, Michael; Nawrot, Mark

    2016-09-01

    The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived as nearer in depth, while visual motion in the direction opposite the pursuit is perceived as farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing an allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth-order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.

  9. An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity

    PubMed Central

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target-tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood compared to its visual counterpart. In earlier work we developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards and uses only this directional information, obtained via specific motor behaviour, to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz, in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking. PMID:28337137

  10. Spatial Disorientation in Gondola Centrifuges Predicted by the Form of Motion as a Whole in 3-D

    PubMed Central

    Holly, Jan E.; Harmon, Katharine J.

    2009-01-01

    INTRODUCTION During a coordinated turn, subjects can misperceive tilts. Subjects accelerating in tilting-gondola centrifuges without external visual reference underestimate the roll angle, and underestimate more when backward-facing than when forward-facing. In addition, during centrifuge deceleration, the perception of pitch can include tumble while paradoxically maintaining a fixed perceived pitch angle. The goal of the present research was to test two competing hypotheses: (1) that components of motion are perceived relatively independently and then combined to form a three-dimensional perception, and (2) that perception is governed by familiarity of motions as a whole in three dimensions, with components depending more strongly on the overall shape of the motion. METHODS Published experimental data were used from existing tilting-gondola centrifuge studies. The two hypotheses were implemented formally in computer models, and centrifuge acceleration and deceleration were simulated. RESULTS The second, whole-motion-oriented hypothesis better predicted subjects' perceptions, including the forward-backward asymmetry and the paradoxical tumble upon deceleration. The predominant stimulus at the beginning of the motion was important, as was the familiarity of centripetal acceleration. CONCLUSION Three-dimensional perception is better predicted by taking into account familiarity with the form of three-dimensional motion. PMID:19198199

  11. Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.

    1995-01-01

    The objective is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness.

  12. Preadapting to Weightlessness

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Parker, D. E.; Arrott, A. P.

    1986-01-01

    Report discusses physiological and physical concepts of proposed training system to precondition astronauts to weightless environment. System prevents motion sickness, often experienced during early part of orbital flight. Also helps prevent seasickness and other forms of terrestrial motion sickness. Training affects subject's perception of inner-ear signals, visual signals, and kinesthetic motion perception. Changed perception resembles that of astronauts who spent many days in space and adapted to weightlessness.

  13. Orientation of selective effects of body tilt on visually induced perception of self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    We examined the effect of body posture on visually induced perception of self-motion (vection) at various angles of observer tilt. The experiment indicated that tilting the observer's body enhanced the perceived strength of vertical vection, whereas body tilt had no effect on horizontal vection. This result suggests an interaction between the effects of visual and vestibular information on the perception of self-motion.

  14. Posture-based processing in visual short-term memory for actions.

    PubMed

    Vicary, Staci A; Stevens, Catherine J

    2014-01-01

    Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.

  15. Curvilinear approach to an intersection and visual detection of a collision.

    PubMed

    Berthelon, C; Mestre, D

    1993-09-01

    Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion, for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.

  16. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  17. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

    An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time-histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known ways. Experiments with either of the visual displays or cockpit motion and some combinations of these were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  18. Phase-linking and the perceived motion during off-vertical axis rotation.

    PubMed

    Holly, Jan E; Wood, Scott J; McCollum, Gin

    2010-01-01

    Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phases of perceived tilt and translation, in contrast to the common perception of matched phases, which corresponds to conical motion with the pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates, slow (45°/s) and fast (180°/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception there is a phase-linking of tilt and translation movements to construct a perception of one's overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing "standard" model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for the model's parameters.
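    The two baseline perceptual rules stated in the abstract, (a) the perceived vertical moving toward alignment with the GIA and (b) perceived translation derived from the portion of the GIA not associated with gravity, can be sketched as a single update step. The first-order relaxation and the time constant `tau` are illustrative assumptions, not the authors' model.

```python
import numpy as np

def perceive_step(gia, gravity_est, dt, tau=2.0):
    """One update of a toy self-motion perception model.
    (a) The internal gravity estimate relaxes toward the measured GIA with
        an assumed time constant tau (perceived vertical aligns with GIA).
    (b) Perceived linear acceleration is the GIA left over after
        subtracting the gravity estimate."""
    gravity_est = gravity_est + (dt / tau) * (gia - gravity_est)
    perceived_accel = gia - gravity_est
    return gravity_est, perceived_accel
```

    With a stationary upright subject the residual acceleration is zero; a sudden sideways GIA component is at first attributed almost entirely to translation, then gradually reinterpreted as tilt, which is the behaviour the rotating-GIA analysis builds on.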

  19. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
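    For readers unfamiliar with the Dmin unit above, a displacement threshold in log minutes of arc can be derived from an on-screen displacement and a viewing distance. The 0.1 mm and 600 mm figures in the usage note are illustrative values, not data from the study.

```python
import math

def displacement_arcmin(displacement_mm, viewing_distance_mm):
    """Convert an on-screen dot displacement to visual angle in arcmin."""
    angle_rad = math.atan2(displacement_mm, viewing_distance_mm)
    return math.degrees(angle_rad) * 60.0

def log_dmin(displacement_mm, viewing_distance_mm):
    """Dmin expressed as log10 minutes of arc, the unit used above."""
    return math.log10(displacement_arcmin(displacement_mm, viewing_distance_mm))
```

    For example, a 0.1 mm displacement viewed at 600 mm subtends about 0.57 arcmin, i.e. roughly -0.24 log minutes of arc, within the range reported above.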

  20. Perception of Biological Motion in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Freitag, Christine M.; Konrad, Carsten; Haberlen, Melanie; Kleser, Christina; von Gontard, Alexander; Reith, Wolfgang; Troje, Nikolaus F.; Krick, Christoph

    2008-01-01

    In individuals with autism or autism-spectrum-disorder (ASD), conflicting results have been reported regarding the processing of biological motion tasks. As biological motion perception and recognition might be related to impaired imitation, gross motor skills and autism specific psychopathology in individuals with ASD, we performed a functional…

  1. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projecting moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self-motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential, although not independent, role of vision in self-motion perception.

  2. The perception of object versus objectless motion.

    PubMed

    Hock, Howard S; Nichols, David F

    2013-05-01

    Wertheimer's (Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 61:161-265, 1912) classical distinction between beta (object) and phi (objectless) motion is elaborated here in a series of experiments concerning competition between two qualitatively different motion percepts, induced by sequential changes in luminance for two-dimensional geometric objects composed of rectangular surfaces. One of these percepts is of spreading-luminance motion that continuously sweeps across the entire object; it exhibits shape invariance and is perceived most strongly for fast speeds. Significantly for the characterization of phi as objectless motion, the spreading luminance does not involve surface boundaries or any other feature; the percept is driven solely by spatiotemporal changes in luminance. Alternatively, and for relatively slow speeds, a discrete series of edge motions can be perceived in the direction opposite to the spreading-luminance motion. Akin to beta motion, the edges appear to move through intermediate positions within the object's changing surfaces. Significantly for the characterization of beta as object motion, edge motion exhibits shape dependence and is based on the detection of oppositely signed changes in contrast (i.e., counterchange) for features essential to the determination of an object's shape, the boundaries separating its surfaces. These results are consistent with area MT neurons that differ with respect to speed preference (Newsome et al., Journal of Neurophysiology, 55:1340-1351, 1986) and shape dependence (Zeki, Journal of Physiology, 236:549-573, 1974).

  3. Unconscious Local Motion Alters Global Image Speed

    PubMed Central

    Khuu, Sieu K.; Chung, Charles Y. L.; Lord, Stephanie; Pearson, Joel

    2014-01-01

    Accurate motion perception of self and object speed is crucial for successful interaction in the world. The context in which we make such speed judgments has a profound effect on their accuracy. Misperceptions of motion speed caused by the context can have drastic consequences in real world situations, but they also reveal much about the underlying mechanisms of motion perception. Here we show that motion signals suppressed from awareness can warp simultaneous conscious speed perception. In Experiment 1, we measured global speed discrimination thresholds using an annulus of 8 local Gabor elements. We show that physically removing local elements from the array attenuated global speed discrimination. However, removing awareness of the local elements only had a small effect on speed discrimination. That is, unconscious local motion elements contributed to global conscious speed perception. In Experiment 2 we measured the global speed of the moving Gabor patterns, when half the elements moved at different speeds. We show that global speed averaging occurred regardless of whether local elements were removed from awareness, such that the speed of invisible elements continued to be averaged together with the visible elements to determine the global speed. These data suggest that contextual motion signals outside of awareness can both boost and affect our experience of motion speed, and suggest that such pooling of motion signals occurs before the conscious extraction of the surround motion speed. PMID:25503603
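    The pooling account from Experiment 2 can be illustrated with a toy comparison between averaging all local element speeds and averaging only the consciously visible ones; the element speeds and the visibility pattern below are made-up values, not the study's stimuli.

```python
import numpy as np

def pooled_global_speed(local_speeds):
    """Pooling model suggested by the data: every local motion signal is
    averaged into the global speed, whether or not it reaches awareness."""
    return float(np.mean(local_speeds))

def visible_only_global_speed(local_speeds, visible):
    """Alternative model: only elements that reach awareness are averaged."""
    speeds = np.asarray(local_speeds, dtype=float)
    return float(np.mean(speeds[np.asarray(visible)]))
```

    With eight elements, four visible at 2°/s and four suppressed at 4°/s, the pooling model predicts a perceived global speed of 3°/s while the visible-only model predicts 2°/s; the study's result favours the former.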

  4. The effect of occlusion therapy on motion perception deficits in amblyopia.

    PubMed

    Giaschi, Deborah; Chapman, Christine; Meier, Kimberly; Narasimhan, Sathyasri; Regan, David

    2015-09-01

    There is growing evidence for deficits in motion perception in amblyopia, but these are rarely assessed clinically. In this prospective study we examined the effect of occlusion therapy on motion-defined form perception and multiple-object tracking. Participants included children (3-10 years old) with unilateral anisometropic and/or strabismic amblyopia who were currently undergoing occlusion therapy, and age-matched control children with normal vision. At the start of the study, deficits in motion-defined form perception were present in at least one eye in 69% of the children with amblyopia. These deficits were still present at the end of the study in 55% of the amblyopia group. For multiple-object tracking, deficits were present initially in 64% and finally in 55% of the children with amblyopia, even after completion of occlusion therapy. Many of these deficits persisted in spite of an improvement in amblyopic-eye visual acuity in response to occlusion therapy. The prevalence of motion perception deficits in amblyopia, as well as their resistance to occlusion therapy, supports the need for new approaches to amblyopia treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Motion perception tasks as potential correlates to driving difficulty in the elderly

    NASA Astrophysics Data System (ADS)

    Raghuram, A.; Lakshminarayanan, V.

    2006-09-01

    Changes in demographics indicate that the population older than 65 is on the rise because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test called the useful field of view (UFOV) was a significant predictor of accident rates compared to other visual function tests. The present study evaluates a qualitative trend in using motion perception tasks as potential visual perceptual correlates for screening elderly drivers who might have difficulty driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimating direction of heading. A motion index score was calculated that was indicative of performance on all of the above-mentioned motion tasks. Scores on visual attention were assessed using the UFOV. A driving habit questionnaire was also administered for a self-report on driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks are successful in identifying subjects who reported difficulty in certain aspects of driving and had accidents. The correlation between UFOV and motion index scores was not significant, indicating that different aspects of visual information processing crucial to driving behaviour are probably being tapped by these two paradigms. UFOV and motion perception tasks together may be a better predictor for identifying at-risk or safe drivers than either one alone.
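    The abstract does not give the formula behind the motion index score; one common way to build such a composite is an average z-score across the task thresholds, sketched here purely as an assumption.

```python
def motion_index(thresholds, norm_means, norm_sds):
    """Hypothetical composite score: mean z-score of a subject's task
    thresholds relative to group norms. Higher scores mean higher
    (i.e., worse) thresholds across the motion tasks."""
    zs = [(t - m) / sd for t, m, sd in zip(thresholds, norm_means, norm_sds)]
    return sum(zs) / len(zs)
```

    A subject one standard deviation above the norm on every task would score 1.0; a subject at the norm on every task would score 0.0.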

  6. Contrast effects on speed perception for linear and radial motion.

    PubMed

    Champion, Rebecca A; Warren, Paul A

    2017-11-01

    Speed perception is vital for safe activity in the environment. However, considerable evidence suggests that perceived speed changes as a function of stimulus contrast, with some investigators suggesting that this might have meaningful real-world consequences (e.g. driving in fog). In the present study we investigate whether the neural effects of contrast on speed perception occur at the level of local or global motion processing. To do this we examine both speed discrimination thresholds and contrast-dependent speed perception for two global motion configurations that have matched local spatio-temporal structure. Specifically we compare linear and radial configurations, the latter of which arises very commonly due to self-movement. In experiment 1 the stimuli comprised circular grating patches. In experiment 2, to match stimuli even more closely, motion was presented in multiple local Gabor patches equidistant from central fixation. Each patch contained identical linear motion but the global configuration was either consistent with linear or radial motion. In both experiments 1 and 2, discrimination thresholds and contrast-induced speed biases were similar in linear and radial conditions. These results suggest that contrast-based speed effects occur only at the level of local motion processing, irrespective of global structure. This result is interpreted in the context of previous models of speed perception and evidence suggesting differences in perceived speed of locally matched linear and radial stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Brief report: altered horizontal binding of single dots to coherent motion in autism.

    PubMed

    David, Nicole; Rose, Michael; Schneider, Till R; Vogeley, Kai; Engel, Andreas K

    2010-12-01

    Individuals with autism often show a fragmented way of perceiving their environment, suggesting a disorder of information integration, possibly due to disrupted communication between brain areas. We investigated thirteen individuals with high-functioning autism (HFA) and thirteen healthy controls using the metastable motion quartet, a stimulus consisting of two dots alternately presented at four locations of a hypothetical square, thereby inducing an apparent motion percept. This percept is vertical or horizontal, the latter requiring binding of motion signals across cerebral hemispheres. Decreasing the horizontal distance between dots could facilitate horizontal percepts. We found evidence for altered horizontal binding in HFA: Individuals with HFA needed stronger facilitation to experience horizontal motion. These data are interpreted in light of reduced cross-hemispheric communication.

  8. Thresholds for the perception of whole-body linear sinusoidal motion in the horizontal plane

    NASA Technical Reports Server (NTRS)

    Mah, Robert W.; Young, Laurence R.; Steele, Charles R.; Schubert, Earl D.

    1989-01-01

    An improved linear sled has been developed to provide precise motion stimuli without generating perceptible extraneous motion cues (a noiseless environment). A modified adaptive forced-choice method was employed to determine perceptual thresholds for whole-body linear sinusoidal motion in 25 subjects. Thresholds for the detection of movement in the horizontal plane were found to be lower than those reported previously. At frequencies of 0.2 to 0.5 Hz, thresholds were shown to be independent of frequency, while at frequencies of 1.0 to 3.0 Hz, thresholds increased (i.e., sensitivity decreased) with increasing frequency, indicating that the perceptual process is not sensitive to the rate of change of acceleration of the motion stimulus. The results suggest that the perception of motion behaves as an integrating accelerometer with a bandwidth of at least 3 Hz.
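
    The integrating-accelerometer reading of the high-frequency data can be sketched as follows: if the percept is built from the integral of acceleration (i.e., velocity) and detection requires a fixed peak-velocity criterion, then the just-detectable acceleration amplitude of a sinusoid grows in proportion to frequency, so acceleration sensitivity falls as frequency rises. The threshold constant below is illustrative, not a value from the study:

```python
import math

def detectable_acceleration(freq_hz, velocity_threshold=0.01):
    """Just-detectable acceleration amplitude (m/s^2) for sinusoidal motion,
    assuming the perceptual system integrates acceleration and applies a
    fixed peak-velocity criterion (illustrative threshold, not fitted).

    For a(t) = A*sin(2*pi*f*t), peak velocity is A / (2*pi*f), so detection
    requires A >= velocity_threshold * 2*pi*f.
    """
    return velocity_threshold * 2.0 * math.pi * freq_hz
```

    Under this sketch, tripling the stimulus frequency triples the acceleration needed for detection, matching the direction of the 1.0-3.0 Hz result.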

  9. Self-motion perception and vestibulo-ocular reflex during whole body yaw rotation in standing subjects: the role of head position and neck proprioception.

    PubMed

    Panichi, Roberto; Botti, Fabio Massimo; Ferraresi, Aldo; Faralli, Mario; Kyriakareli, Artemis; Schieppati, Marco; Pettorossi, Vito Enrico

    2011-04-01

    Self-motion perception and vestibulo-ocular reflex (VOR) were studied during whole body yaw rotation in the dark at different static head positions. Rotations consisted of four cycles of symmetric sinusoidal and asymmetric oscillations. Self-motion perception was evaluated by measuring the ability of subjects to manually track a static remembered target. VOR was recorded separately and the slow phase eye position (SPEP) was computed. Three different head static yaw deviations (active and passive) relative to the trunk (0°, 45° to right and 45° to left) were examined. Active head deviations had a significant effect during asymmetric oscillation: the movement perception was enhanced when the head was kept turned toward the side of body rotation and decreased in the opposite direction. Conversely, passive head deviations had no effect on movement perception. Further, vibration (100 Hz) of the neck muscles splenius capitis and sternocleidomastoideus remarkably influenced perceived rotation during asymmetric oscillation. On the other hand, SPEP of VOR was modulated by active head deviation, but was not influenced by neck muscle vibration. Through its effects on motion perception and reflex gain, head position improved gaze stability and enhanced self-motion perception in the direction of the head deviation. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Stereomotion speed perception is contrast dependent

    NASA Technical Reports Server (NTRS)

    Brooks, K.

    2001-01-01

    The effect of contrast on the perception of stimulus speed for stereomotion and monocular lateral motion was investigated for successive matches in random-dot stimuli. The familiar 'Thompson effect'--that a reduction in contrast leads to a reduction in perceived speed--was found in similar proportions for both binocular images moving in depth, and for monocular images translating laterally. This result is consistent with the idea that the monocular motion system has a significant input to the stereomotion system, and dominates the speed percept for approaching motion.

  11. Neural dynamics of motion processing and speed discrimination.

    PubMed

    Chey, J; Grossberg, S; Mingolla, E

    1998-09-01

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, one that realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different sizes are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
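
    The size-speed correlation at the heart of the model can be illustrated with a toy calculation: if each filter's transient response is strongest when a stimulus traverses the filter's spatial extent in a roughly fixed time, then preferred speed grows with filter size, and a bank of filter sizes yields a distributed speed code. A sketch under that assumption (the transient duration and tuning widths are arbitrary illustrative constants, not parameters from the paper):

```python
import math

def preferred_speed(filter_size_deg, transient_s=0.05):
    """Speed (deg/s) giving the peak response of a spatial filter, assuming
    the transient response is maximal when the stimulus crosses the filter
    in a fixed time. Larger filters prefer faster speeds."""
    return filter_size_deg / transient_s

def population_estimate(stimulus_speed, sizes=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Crude population decode: weight each filter's preferred speed by a
    Gaussian tuning response to the stimulus speed. Illustrative only."""
    weights = []
    for s in sizes:
        pref = preferred_speed(s)
        sigma = 0.4 * pref  # broader tuning for faster-preferring cells
        weights.append(math.exp(-0.5 * ((stimulus_speed - pref) / sigma) ** 2))
    total = sum(weights)
    return sum(w * preferred_speed(s) for w, s in zip(weights, sizes)) / total
```

    The decode is deliberately crude, but it shows how a purely size-based filter bank can already order speeds monotonically without any cell being an explicit "speedometer".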

  12. Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review.

    PubMed

    Spering, Miriam; Montagnini, Anna

    2011-04-22

    Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Tilt and Translation Motion Perception during Off Vertical Axis Rotation

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Reschke, Millard F.; Clement, Gilles

    2006-01-01

    The effect of stimulus frequency on tilt and translation motion perception was studied during constant velocity off-vertical axis rotation (OVAR), and compared to the effect of stimulus frequency on eye movements. Fourteen healthy subjects were rotated in darkness about their longitudinal axis 10 deg and 20 deg off-vertical at 0.125 Hz, and 20 deg off-vertical at 0.5 Hz. Oculomotor responses were recorded using videography, and perceived motion was evaluated using verbal reports and a joystick with four degrees of freedom (pitch and roll tilt, medial-lateral and anterior-posterior translation). During the lower frequency OVAR, subjects reported the perception of progressing along the edge of a cone. During higher frequency OVAR, subjects reported the perception of progressing along the edge of an upright cylinder. The modulation of both tilt recorded from the joystick and ocular torsion significantly increased as the tilt angle increased from 10 deg to 20 deg at 0.125 Hz, and then decreased at 0.5 Hz. Both tilt perception and torsion slightly lagged head orientation at 0.125 Hz. The phase lag of torsion increased at 0.5 Hz, while the phase of tilt perception did not change as a function of frequency. The amplitude of both translation perception recorded from the joystick and horizontal eye movements was negligible at 0.125 Hz and increased as a function of stimulus frequency. While the phase lead of horizontal eye movements decreased at 0.5 Hz, the phase of translation perception did not vary with stimulus frequency and was similar to the phase of tilt perception during all conditions. During dynamic linear acceleration in the absence of other sensory input (canal, vision), a change in stimulus frequency alone elicits similar changes in the amplitude of both self-motion perception and eye movements. However, in contrast to the eye movements, the phase of both perceived tilt and translation motion is not altered by stimulus frequency. We conclude that the neural processing that distinguishes tilt and translation linear acceleration stimuli differs between eye movements and motion perception.

  14. Sensory perception. [role of human vestibular system in dynamic space perception and manual vehicle control

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effect of motion on the ability of men to perform a variety of control actions was investigated. Special attention was given to experimental and analytical studies of the dynamic characteristics of the otoliths and semicircular canals using a two-axis angular motion simulator and a one-axis linear motion simulator.

  15. The Perception of Biological and Mechanical Motion in Female Fragile X Premutation Carriers

    ERIC Educational Resources Information Center

    Keri, Szabolcs; Benedek, Gyorgy

    2010-01-01

    Previous studies reported impaired visual information processing in patients with fragile x syndrome and in premutation carriers. In this study, we assessed the perception of biological motion (a walking point-light character) and mechanical motion (a rotating shape) in 25 female fragile x premutation carriers and in 20 healthy non-carrier…

  16. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Phase-linking and the perceived motion during off-vertical axis rotation

    PubMed Central

    Wood, Scott J.; McCollum, Gin

    2010-01-01

    Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little-known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates—slow (45°/s) and fast (180°/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one’s overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. 
This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing “standard” model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model. PMID:19937069

  18. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  19. Stimulus factors in motion perception and spatial orientation

    NASA Technical Reports Server (NTRS)

    Post, R. B.; Johnson, C. A.

    1984-01-01

    The Malcolm horizon, or Peripheral Vision Horizon Device (PVHD), uses a large projected light stimulus as an attitude indicator in order to achieve a more compelling sense of roll than is obtained with smaller devices. The basic principle is that the larger stimulus is more similar to the visibility of a real horizon during roll, and does not require fixation and attention to the degree that smaller displays do. Successful implementation of such a device requires adjustment of the parameters of the visual stimulus so that its effects on motion perception and spatial orientation are optimized. With this purpose in mind, the effects of relevant image variables on the perception of object motion, self-motion and spatial orientation are reviewed.

  20. Effects of proposed preflight adaptation training on eye movements, self-motion perception, and motion sickness - A progress report

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Reschke, M. F.; Von Gierke, H. E.; Lessard, C. S.

    1987-01-01

    The preflight adaptation trainer (PAT) was designed to produce rearranged relationships between visual and otolith signals analogous to those experienced in space. Investigations have been undertaken with three prototype trainers. The results indicated that exposure to the PAT sensory rearrangement altered self-motion perception, induced motion sickness, and changed the amplitude and phase of the horizontal eye movements evoked by roll stimulation. However, the changes were inconsistent.

  1. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  3. Modeling a space-variant cortical representation for apparent motion.

    PubMed

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
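
    The linear dependence of Dmax on eccentricity falls out of receptive field scaling: if receptive field size grows as the inverse of cortical magnification, roughly proportional to (e + e2), and Dmax scales with local receptive field size, then Dmax is linear in eccentricity. A sketch with illustrative constants (not parameters of the model in the paper):

```python
def rf_size_deg(ecc_deg, e2=1.0, s0=0.2):
    """Receptive field diameter grows linearly with eccentricity e,
    mirroring inverse cortical magnification M(e) ~ 1/(e + e2).
    e2 and s0 are illustrative constants, not fitted values."""
    return s0 * (ecc_deg + e2)

def dmax_deg(ecc_deg, k=1.5):
    """Dmax modeled as proportional to local receptive field size,
    hence linear in eccentricity."""
    return k * rf_size_deg(ecc_deg)
```

    Equal steps in eccentricity then produce equal increments in Dmax, which is the qualitative linear trend the simulations reproduce without invoking extrastriate areas.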

  4. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models of rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step toward dynamic visual information processing.
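
    The core cw/ccw computation, multiplying each unit's delayed excitation with excitation gathered from a neighbour around the cycle, can be reduced to a correlator over a ring of positions. This is an illustrative reduction, not the published network:

```python
N = 12  # positions of direction-selective units arranged in a cycle

def rotation_evidence(frames):
    """Accumulate clockwise and counter-clockwise evidence by multiplying
    each unit's previous (delayed) activity with its neighbour's current
    activity, one step cw or ccw around the ring."""
    cw = ccw = 0.0
    for prev, cur in zip(frames, frames[1:]):
        for i in range(N):
            cw += prev[i] * cur[(i + 1) % N]   # activity advanced one step cw
            ccw += prev[i] * cur[(i - 1) % N]  # activity advanced one step ccw
    return cw, ccw

# A bright spot stepping clockwise around the ring:
cw_frames = [[1.0 if i == t % N else 0.0 for i in range(N)] for t in range(N)]
cw, ccw = rotation_evidence(cw_frames)  # cw evidence dominates
```

    Reversing the frame order flips the dominant channel, so the same multiply-and-sum wiring separates cw from ccw rotation, which is the essential asymmetry the DSNN arrangement exploits.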

  5. Effects of Frequency and Motion Paradigm on Perception of Tilt and Translation During Periodic Linear Acceleration

    NASA Technical Reports Server (NTRS)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, Scott J.

    2009-01-01

    Previous studies have demonstrated an effect of frequency on the gain of tilt and translation perception. Results from different motion paradigms are often combined to extend the stimulus frequency range. For example, Off-Vertical Axis Rotation (OVAR) and Variable Radius Centrifugation (VRC) are useful to test low frequencies of linear acceleration at amplitudes that would require impractical sled lengths. The purpose of this study was to compare roll-tilt and lateral translation motion perception in 12 healthy subjects across four paradigms: OVAR, VRC, sled translation and rotation about an earth-horizontal axis. Subjects were oscillated in darkness at six frequencies from 0.01875 to 0.6 Hz (peak acceleration equivalent to 10 deg, less for sled motion below 0.15 Hz). Subjects verbally described the amplitude of perceived tilt and translation, and used a joystick to indicate the direction of motion. Consistent with previous reports, tilt perception gain decreased as a function of stimulus frequency in the motion paradigms without concordant canal tilt cues (OVAR, VRC and Sled). Translation perception gain was negligible at low stimulus frequencies and increased at higher frequencies. There were no significant differences between the phase of tilt and translation, nor did the phase significantly vary across stimulus frequency. There were differences in perception gain across the different paradigms. Paradigms that included actual tilt stimuli had the larger tilt gains, and paradigms that included actual translation stimuli had larger translation gains. In addition, the frequency at which there was a crossover of tilt and translation gains appeared to vary across motion paradigm between 0.15 and 0.3 Hz. Since the linear acceleration in the head lateral plane was equivalent across paradigms, differences in gain may be attributable to the presence of linear accelerations in orthogonal directions and/or cognitive aspects based on the expected motion paths.
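
    A common way to formalize the tilt/translation crossover described above is a complementary pair of first-order filters on the gravito-inertial acceleration: low frequencies are attributed to tilt, high frequencies to translation, with the two gains meeting at a corner frequency. A sketch with the corner placed near the reported 0.15-0.3 Hz range (the filter form and cutoff are illustrative assumptions, not the analysis used in the study):

```python
import math

def tilt_gain(f_hz, fc_hz=0.2):
    """First-order low-pass gain: fraction of the gravito-inertial
    acceleration interpreted as tilt at stimulus frequency f."""
    return 1.0 / math.sqrt(1.0 + (f_hz / fc_hz) ** 2)

def translation_gain(f_hz, fc_hz=0.2):
    """Complementary first-order high-pass gain: fraction interpreted
    as translation."""
    r = f_hz / fc_hz
    return r / math.sqrt(1.0 + r ** 2)
```

    At the lowest stimulus frequency used here (0.01875 Hz) nearly all of the signal is read as tilt; by 0.6 Hz translation dominates, and the two gains cross at the corner frequency, mirroring the observed gain crossover.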

  6. Object motion perception is shaped by the motor control mechanism of ocular pursuit.

    PubMed

    Schweigart, G; Mergner, T; Barnes, G R

    2003-02-01

    It is still a matter of debate whether the control of smooth pursuit eye movements involves an internal drive signal from object motion perception. We measured human target velocity and target position perceptions and compared them with the presumed pursuit control mechanism (model simulations). We presented normal subjects (Ns) and vestibular loss patients (Ps) with visual target motion in space. Concurrently, a visual background was presented, which was kept stationary or was moved with or against the target (five combinations). The motion stimuli consisted of smoothed ramp displacements with different dominant frequencies and peak velocities (0.05, 0.2, 0.8 Hz; 0.2-25.6 degrees/s). Subjects always pursued the target with their eyes. In a first experiment they gave verbal magnitude estimates of perceived target velocity in space and of self-motion in space. The target velocity estimates of both Ns and Ps tended to saturate at 0.8 Hz and with peak velocities >3 degrees/s. Below these ranges the velocity estimates showed a pronounced modulation in relation to the relative target-to-background motion ('background effect'; for example, 'background with'-motion decreased and 'against'-motion increased perceived target velocity). Pronounced only in Ps and not in Ns, there was an additional modulation in relation to the relative head-to-background motion, which co-varied with an illusion of self-motion in space (circular vection, CV) in Ps. In a second experiment, subjects performed retrospective reproduction of perceived target start and end positions with the same stimuli. Perceived end position was essentially veridical in both Ns and Ps (apart from a small constant offset). Reproduced start position showed an almost negligible background effect in Ns. In contrast, it showed a pronounced modulation in Ps, which again was related to CV. The results were compared with simulations of a model that we have recently presented for velocity control of eye pursuit. 
We found that the main features of target velocity perception (in terms of dynamics and modulation by background) closely correspond to those of the internal drive signal for target pursuit, compatible with the notion of a common source of both the perception and the drive signal. In contrast, the eye pursuit movement is almost free of the background effect. As an explanation, we postulate that the target-to-background component in the target pursuit drive signal largely neutralises the background-to-eye retinal slip signal (optokinetic reflex signal) that feeds into the eye premotor mechanism as a competitor of the target retinal slip signal. An extension of the model allowed us to simulate also the findings of the target position perception. It is assumed to be represented in a perceptual channel that is distinct from the velocity perception, building on an efference copy of the essentially accurate eye position. We hold that other visuomotor behaviour, such as target reaching with the hand, builds mainly on this target position percept and therefore is not contaminated by the background effect in the velocity percept. Generally, the coincidence of an erroneous velocity percept and an almost perfect eye pursuit movement during background motion is discussed as an instructive example of an action-perception dissociation. This dissociation cannot be taken to indicate that the two functions are internally represented in separate brain control systems, but rather reflects the intimate coupling between both functions.

  7. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning have reported that, after an association is established between a sound sequence without spatial information and visual motion information, the sound sequence can trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns can be observed in several brain areas, including the motion processing areas, for spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information may mutually interact in spatiotemporal processing in perception of the external world and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing. PMID:26733827

  8. Object Manipulation and Motion Perception: Evidence of an Influence of Action Planning on Visual Processing

    ERIC Educational Resources Information Center

    Lindemann, Oliver; Bekkering, Harold

    2009-01-01

    In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction. Action execution had to be delayed until the…

  9. Altered perception of apparent motion in schizophrenia spectrum disorder.

    PubMed

    Tschacher, Wolfgang; Dubouloz, Priscilla; Meier, Rahel; Junghan, Uli

    2008-06-30

    Apparent motion (AM), the Gestalt perception of motion in the absence of physical motion, was used to study perceptual organization and neurocognitive binding in schizophrenia. Associations between AM perception and psychopathology, as well as meaningful subgroups, were sought. Circular and stroboscopic AM stimuli were presented to 68 schizophrenia spectrum patients and healthy participants. Psychopathology was measured using the Positive and Negative Syndrome Scale (PANSS). Psychopathology was related to AM perception differentially: positive and disorganization symptoms were linked to reduced Gestalt stability, whereas negative symptoms, excitement, and depression had opposite regression weights. Dimensions of psychopathology thus have opposing effects on Gestalt perception, and AM perception was generally found to be closely associated with psychopathology. No difference existed between patients and controls, but two latent classes were found. Class A members, who had low levels of AM stability, made up the majority of inpatients and control subjects; these participants were generally young and male, with short reaction times. Class B typically contained outpatients and some control subjects; participants in class B were older and showed longer reaction times. Hence AM perceptual dysfunctions are not specific to schizophrenia, yet AM may be a promising stage marker.

  10. Individual differences in visual motion perception and neurotransmitter concentrations in the human brain.

    PubMed

    Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M

    2017-02-19

    Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following presentation of the same stimulus, some participants perceived motion assimilation while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in this suprasensory area are important in determining an individual's tendency toward one of these two antagonistic visual motion percepts. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  11. Vestibular signals in primate cortex for self-motion perception.

    PubMed

    Gu, Yong

    2018-04-21

    The vestibular peripheral organs in our inner ears detect transient motion of the head in everyday life. This information is sent to the central nervous system for automatic processes such as vestibulo-ocular reflexes, balance and postural control, and higher cognitive functions including perception of self-motion and spatial orientation. Recent neurophysiological studies have discovered a prominent vestibular network in the primate cerebral cortex. Many of the areas involved are multisensory: their neurons are modulated by both vestibular signals and visual optic flow, potentially facilitating more robust heading estimation through cue integration. Combining psychophysics, computation, physiological recording and causal manipulation techniques, recent work has addressed both the encoding and decoding of vestibular signals for self-motion perception. Copyright © 2018. Published by Elsevier Ltd.

  12. Motion sickness severity and physiological correlates during repeated exposures to a rotating optokinetic drum

    NASA Technical Reports Server (NTRS)

    Hu, Senqi; Grant, Wanda F.; Stern, Robert M.; Koch, Kenneth L.

    1991-01-01

    Fifty-two subjects were exposed to a rotating optokinetic drum. Ten of these subjects who became motion sick during the first session completed two additional sessions. Subjects' symptoms of motion sickness, perception of self-motion, electrogastrograms (EGGs), heart rate, mean successive differences of R-R intervals (RRI), and skin conductance were recorded for each session. The results from the first session indicated that the development of motion sickness was accompanied by increased EGG 4-9 cpm activity (gastric tachyarrhythmia), decreased mean successive differences of RRI, increased skin conductance levels, and increased self-motion perception. The results from the subjects who had three repeated sessions showed that 4-9 cpm EGG activity, skin conductance levels, perception of self-motion, and symptoms of motion sickness all increased significantly during the drum rotation period of the first session, but increased significantly less during the following sessions. Mean successive differences of RRI decreased significantly during the drum rotation period of the first session, but decreased significantly less during the following sessions. These results show that the development of motion sickness is accompanied by an increase in gastric tachyarrhythmia, an increase in sympathetic activity, and a decrease in parasympathetic activity, and that adaptation to motion sickness is accompanied by recovery of autonomic nervous system balance.
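    The 4-9 cpm tachyarrhythmia measure above amounts to asking what fraction of EGG spectral power falls in that band. A minimal sketch of such an estimate, assuming a uniformly sampled trace; the function name, sampling rate, and synthetic signal are illustrative, not the study's actual analysis:

    ```python
    import numpy as np

    def band_power_fraction(signal, fs_hz, lo_cpm, hi_cpm):
        """Fraction of total spectral power in [lo_cpm, hi_cpm] (cycles per minute)."""
        signal = signal - np.mean(signal)            # remove DC offset
        freqs_hz = np.fft.rfftfreq(len(signal), d=1.0 / fs_hz)
        power = np.abs(np.fft.rfft(signal)) ** 2
        freqs_cpm = freqs_hz * 60.0                  # convert Hz to cycles per minute
        band = (freqs_cpm >= lo_cpm) & (freqs_cpm <= hi_cpm)
        return power[band].sum() / power.sum()

    # Synthetic EGG: a normal 3-cpm gastric rhythm plus a weaker 6-cpm
    # (tachyarrhythmic) component, sampled at 1 Hz for 10 minutes.
    fs = 1.0
    t = np.arange(0, 600, 1.0 / fs)
    egg = np.sin(2 * np.pi * (3 / 60) * t) + 0.5 * np.sin(2 * np.pi * (6 / 60) * t)

    tachy = band_power_fraction(egg, fs, 4, 9)       # power fraction in the 4-9 cpm band
    ```

    With these amplitudes (1 vs. 0.5), the 6-cpm component carries one quarter of the 3-cpm power, so the band fraction comes out near 0.2.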

  13. Tuning self-motion perception in virtual reality with visual illusions.

    PubMed

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments differs significantly from that in the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers have proposed scaling the mapped virtual camera motion relative to the tracked real-world movement of the user until real and virtual motion are perceived as equal, i.e., mapping real-world movements to the VE with a larger gain in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular due to misalignments of the two worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs while avoiding any quantitative discrepancy between real and virtual motion. In particular, we consider the regions of the virtual view to which these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. We introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
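    The gain-scaling idea this abstract argues against (and then improves on) is simple to state in code: amplify the tracked displacement before applying it to the virtual camera. A minimal sketch; the function name and gain value are illustrative assumptions, not the authors' implementation:

    ```python
    def apply_translation_gain(real_delta, gain):
        """Map a tracked real-world displacement (dx, dy, dz, in metres)
        to a virtual camera displacement scaled by a uniform gain.
        A gain > 1 amplifies virtual motion to compensate for the typical
        underestimation of travel distances in virtual environments."""
        return tuple(gain * d for d in real_delta)

    # A user steps 0.8 m forward; with gain 1.25 the virtual camera moves 1.0 m.
    virtual_step = apply_translation_gain((0.0, 0.0, 0.8), 1.25)
    ```

    The paper's point is precisely that such a gain introduces a quantitative real/virtual discrepancy, which the optic flow illusions avoid.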

  14. Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.

    1994-01-01

    The objective of this proposal is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness. Results of experimentation are summarized and modifications to a two-axis rotation device are described. Abstracts of a number of papers generated during the reporting period are appended.

  15. Sex differences in the development of brain mechanisms for processing biological motion.

    PubMed

    Anderson, L C; Bolling, D Z; Schelinski, S; Coffman, M C; Pelphrey, K A; Kaiser, M D

    2013-12-01

    Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. © 2013 Elsevier Inc. All rights reserved.

  16. Sex Differences in the Development of Brain Mechanisms for Processing Biological Motion

    PubMed Central

    Anderson, L.C.; Bolling, D.Z.; Schelinski, S.; Coffman, M.C.; Pelphrey, K.A.; Kaiser, M.D.

    2013-01-01

    Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. PMID:23876243

  17. Perception of Elasticity in the Kinetic Illusory Object with Phase Differences in Inducer Motion

    PubMed Central

    Masuda, Tomohiro; Sato, Kazuki; Murakoshi, Takuma; Utsumi, Ken; Kimura, Atsushi; Shirai, Nobu; Kanazawa, So; Yamaguchi, Masami K.; Wada, Yuji

    2013-01-01

    Background: It is known that subjective contours are perceived even when a figure involves motion. However, whether this includes the perception of rigidity or deformation of an illusory surface remains unknown. In particular, since most visual stimuli used in previous studies were generated in order to induce illusory rigid objects, the potential perception of material properties such as rigidity or elasticity in these illusory surfaces has not been examined. Here, we elucidate whether the magnitude of phase difference in oscillation influences the visual impression of an object's elasticity (Experiment 1) and identify whether such elasticity percepts are accompanied by the shape of the subjective contours, which can be assumed to be strongly correlated with the perception of rigidity (Experiment 2). Methodology/Principal Findings: In Experiment 1, the phase differences in the oscillating motion of inducers were controlled to investigate whether they influenced the visual impression of an illusory object's elasticity. The results demonstrated that the impression of the elasticity of an illusory surface with subjective contours varied systematically with the degree of phase difference. In Experiment 2, we examined whether the subjective contours of a perceived object appeared linear or curved using multi-dimensional scaling analysis. The results indicated that the contours of a moving illusory object were perceived as more curved than linear in all phase-difference conditions. Conclusions/Significance: These findings suggest that the phase difference in an object's motion is a significant factor in the material perception of motion-related elasticity. PMID:24205281

  18. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  19. Stimulation of PPC Affects the Mapping between Motion and Force Signals for Stiffness Perception But Not Motion Control

    PubMed Central

    Mawase, Firas; Karniel, Amir; Donchin, Opher; Rothwell, John; Nisky, Ilana; Davare, Marco

    2016-01-01

    How motion and sensory inputs are combined to assess an object's stiffness is still unknown. Here, we provide evidence for the existence of a stiffness estimator in the human posterior parietal cortex (PPC). We showed previously that delaying force feedback with respect to motion when interacting with an object caused participants to underestimate its stiffness. We found that applying theta-burst transcranial magnetic stimulation (TMS) over the PPC, but not the dorsal premotor cortex, enhances this effect without affecting movement control. We explain this enhancement as an additional lag in force signals. This is the first causal evidence that the PPC is not only involved in motion control, but also has an important role in perception that is dissociated from action. We provide a computational model suggesting that the PPC integrates position and force signals for perception of stiffness and that TMS alters the synchronization between the two signals, causing lasting consequences on perceptual behavior. SIGNIFICANCE STATEMENT When selecting an object such as a ripe fruit or sofa, we need to assess the object's stiffness. Because we lack dedicated stiffness sensors, we rely on an as yet unknown mechanism that generates stiffness percepts by combining position and force signals. Here, we found that the posterior parietal cortex (PPC) contributes to combining position and force signals for stiffness estimation. This finding challenges the classical view that the PPC regulates position signals only for motion control, as we highlight a key role of the PPC in perception that is dissociated from action. Altogether, this sheds light on brain mechanisms underlying the interaction between action and perception and may help in the development of better teleoperation systems and rehabilitation of patients with sensory impairments. PMID:27733607

  20. Stimulation of PPC Affects the Mapping between Motion and Force Signals for Stiffness Perception But Not Motion Control.

    PubMed

    Leib, Raz; Mawase, Firas; Karniel, Amir; Donchin, Opher; Rothwell, John; Nisky, Ilana; Davare, Marco

    2016-10-12

    How motion and sensory inputs are combined to assess an object's stiffness is still unknown. Here, we provide evidence for the existence of a stiffness estimator in the human posterior parietal cortex (PPC). We showed previously that delaying force feedback with respect to motion when interacting with an object caused participants to underestimate its stiffness. We found that applying theta-burst transcranial magnetic stimulation (TMS) over the PPC, but not the dorsal premotor cortex, enhances this effect without affecting movement control. We explain this enhancement as an additional lag in force signals. This is the first causal evidence that the PPC is not only involved in motion control, but also has an important role in perception that is dissociated from action. We provide a computational model suggesting that the PPC integrates position and force signals for perception of stiffness and that TMS alters the synchronization between the two signals, causing lasting consequences on perceptual behavior. When selecting an object such as a ripe fruit or sofa, we need to assess the object's stiffness. Because we lack dedicated stiffness sensors, we rely on an as yet unknown mechanism that generates stiffness percepts by combining position and force signals. Here, we found that the posterior parietal cortex (PPC) contributes to combining position and force signals for stiffness estimation. This finding challenges the classical view that the PPC regulates position signals only for motion control, as we highlight a key role of the PPC in perception that is dissociated from action. Altogether, this sheds light on brain mechanisms underlying the interaction between action and perception and may help in the development of better teleoperation systems and rehabilitation of patients with sensory impairments. Copyright © 2016 Leib et al.

  1. Motion Perception and Manual Control Performance During Passive Tilt and Translation Following Space Flight

    NASA Technical Reports Server (NTRS)

    Clement, Gilles; Wood, Scott J.

    2010-01-01

    This joint ESA-NASA study is examining changes in motion perception following Space Shuttle flights and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data has been collected on 5 astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation (216 deg/s) combined with body translation (12-22 cm, peak-to-peak) is utilized to elicit roll-tilt perception (equivalent to 20 deg, peak-to-peak). A forward-backward moving sled (24-390 cm, peak-to-peak) with or without chair tilting in pitch is utilized to elicit pitch tilt perception (equivalent to 20 deg, peak-to-peak). These combinations are elicited at 0.15, 0.3, and 0.6 Hz for evaluating the effect of motion frequency on tilt-translation ambiguity. In both devices, a closed-loop nulling task is also performed during pseudorandom motion with and without vibrotactile feedback of tilt. All tests are performed in complete darkness. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for translation motion perception to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. The results of this study indicate that post-flight recovery of motion perception and manual control performance is complete within 8 days following short-duration space missions. Vibrotactile feedback of tilt improves manual control performance both before and after flight.

  2. Perception of Motion in Statistically-Defined Displays.

    DTIC Science & Technology

    1988-02-15

    motion encoding (Reichardt, 1961; Barlow and Levick, 1963; van Doorn and Koenderink, 1982a, b; van de Grind, Koenderink, van Doorn, 1983). A bilocal...motion perception. Psychological Review, 87, 435-469. Barlow, H. B. and Levick, W. R. (1963) The mechanisms of directionally...
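    The excerpt cites Reichardt-style bilocal detectors, which infer direction by correlating a delayed signal from one location with the current signal from a neighbouring location and subtracting the mirror-symmetric term. A minimal 1-D sketch under that scheme; the array shapes, grating, and parameter values are illustrative, not taken from the report:

    ```python
    import numpy as np

    def reichardt_response(frames, offset=1, delay=1):
        """Opponent Reichardt correlator on a (time, space) luminance array.
        Correlates the delayed left input with the current right input,
        subtracts the mirror-symmetric term; positive output signals
        rightward motion, negative output leftward motion."""
        a = frames[:, :-offset]      # left receptor of each detector pair
        b = frames[:, offset:]       # right receptor, `offset` pixels away
        return np.mean(a[:-delay] * b[delay:] - b[:-delay] * a[delay:])

    # Sinusoidal gratings drifting one pixel per frame in opposite directions.
    t = np.arange(32)[:, None]
    x = np.arange(64)[None, :]
    rightward = np.sin(2 * np.pi * (x - t) / 8)
    leftward = np.sin(2 * np.pi * (x + t) / 8)
    ```

    For the rightward grating the delayed left signal lines up with the current right signal, so the first correlation term dominates and the output is positive; the leftward grating drives the opponent term instead.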

  3. Mental Rotation Meets the Motion Aftereffect: The Role of hV5/MT+ in Visual Mental Imagery

    ERIC Educational Resources Information Center

    Seurinck, Ruth; de Lange, Floris P.; Achten, Erik; Vingerhoets, Guy

    2011-01-01

    A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of an MAE while subjects…

  4. Perception of Motion in Statistically-Defined Displays

    DTIC Science & Technology

    1989-04-15

    psychophysical study before. He was paid $7.50/hour for his participation. Also, to ensure high motivation, he received an additional one cent for every...correct response. This was the same motivational device used in the earlier work on motion discrimination (Ball and Sekuler, 1982). The observer...scientists, physiologists, and people interested in computer vision. Finally, one of the main motives for studying motion perception is a desire to

  5. Development of Visual Motion Perception for Prospective Control: Brain and Behavioral Studies in Infants

    PubMed Central

    Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.

    2016-01-01

    During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908

  6. Asymmetric vestibular stimulation reveals persistent disruption of motion perception in unilateral vestibular lesions.

    PubMed

    Panichi, R; Faralli, M; Bruni, R; Kiriakarely, A; Occhigrossi, C; Ferraresi, A; Bronstein, A M; Pettorossi, V E

    2017-11-01

    Self-motion perception was studied in patients with unilateral vestibular lesions (UVL) due to acute vestibular neuritis at 1 wk and 4, 8, and 12 mo after the acute episode. We assessed vestibularly mediated self-motion perception by measuring the error in reproducing the position of a remembered visual target at the end of four cycles of asymmetric whole-body rotation. The oscillatory stimulus consists of a slow (0.09 Hz) and a fast (0.38 Hz) half cycle. A large error was present in UVL patients when the slow half cycle was delivered toward the lesion side, but minimal toward the healthy side. This asymmetry diminished over time, but it remained abnormally large at 12 mo. In contrast, vestibulo-ocular reflex responses showed a large direction-dependent error only initially, then they normalized. Normalization also occurred for conventional reflex vestibular measures (caloric tests, subjective visual vertical, and head shaking nystagmus) and for perceptual function during symmetric rotation. Vestibular-related handicap, measured with the Dizziness Handicap Inventory (DHI) at 12 mo, correlated with self-motion perception asymmetry but not with abnormalities in vestibulo-ocular function. We conclude that 1) a persistent self-motion perceptual bias is revealed by asymmetric rotation in UVLs despite vestibulo-ocular function becoming symmetric over time, 2) this dissociation is caused by differential perceptual-reflex adaptation to high- and low-frequency rotations when these are combined as in our asymmetric stimulus, 3) the findings imply differential central compensation for vestibuloperceptual and vestibulo-ocular reflex functions, and 4) disruption of self-motion perception may mediate long-term vestibular-related handicap in UVL patients. NEW & NOTEWORTHY A novel vestibular stimulus, combining asymmetric slow and fast sinusoidal half cycles, revealed persistent vestibuloperceptual dysfunction in unilateral vestibular lesion (UVL) patients. 
The compensation of motion perception after UVL was slower than that of the vestibulo-ocular reflex. Perceptual, but not vestibulo-ocular reflex, deficits correlated with dizziness-related handicap. Copyright © 2017 the American Physiological Society.

  7. Global motion perception deficits in autism are reflected as early as primary visual cortex

    PubMed Central

    Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.

    2014-01-01

    Individuals with autism are often characterized as ‘seeing the trees, but not the forest’, attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing or (ii) decision formation, using an archetypal example of global perception: coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient- and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects, we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). 
These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. PMID:25060095
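    The coherence manipulation in this study (the proportion of dots carrying the signal direction) is the standard random-dot kinematogram design. A minimal per-frame update might look like the following sketch; the function and parameter names are illustrative, not the stimulus code used in the experiment:

    ```python
    import numpy as np

    def update_dots(positions, coherence, direction_rad, step, rng):
        """One frame of a random-dot kinematogram: a `coherence` fraction of
        dots moves in the signal direction, the rest in random directions.
        Positions wrap within the unit square [0, 1)^2."""
        n = len(positions)
        signal = rng.random(n) < coherence               # which dots carry the signal
        angles = np.where(signal, direction_rad, rng.uniform(0, 2 * np.pi, n))
        positions = positions + step * np.column_stack([np.cos(angles), np.sin(angles)])
        return np.mod(positions, 1.0), signal

    # 200 dots, 15% coherence, rightward signal direction (0 rad).
    rng = np.random.default_rng(0)
    dots = rng.random((200, 2))
    dots, signal = update_dots(dots, coherence=0.15, direction_rad=0.0, step=0.01, rng=rng)
    ```

    Lowering `coherence` or shortening the number of frames shown reduces the motion signal available for integration, which is exactly the two factors the study varied.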

  8. Neural dynamics of motion perception: direction fields, apertures, and resonant grouping.

    PubMed

    Grossberg, S; Mingolla, E

    1993-03-01

    A neural network model of global motion segmentation by visual cortex is described. Called the motion boundary contour system (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyze how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The motion BCS describes how preprocessing of motion signals by a motion oriented contrast (MOC) filter is joined to long-range cooperative grouping mechanisms in a motion cooperative-competitive (MOCC) loop to control phenomena such as motion capture. The motion BCS is computed in parallel with the static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the motion BCS and the static BCS, specialized to process motion directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions are made about microscopic computational differences of the parallel cortical streams V1-->MT and V1-->V2-->MT, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions of contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.

  9. Schematic and realistic biological motion identification in children with high-functioning autism spectrum disorder

    PubMed Central

    Wright, Kristyn; Kelley, Elizabeth; Poulin-Dubois, Diane

    2014-01-01

    Research investigating biological motion perception in children with ASD has revealed conflicting findings concerning whether impairments in biological motion perception exist. The current study investigated how children with high-functioning ASD (HF-ASD) performed on two tasks of biological motion identification: a novel schematic motion identification task and a point-light biological motion identification task. Twenty-two HF-ASD children were matched with 21 typically developing (TD) children on gender, non-verbal mental age, and chronological age (M = 6.72 years). On both tasks, HF-ASD children performed with similar accuracy as TD children. Across groups, children performed better on animate than on inanimate trials of both tasks. These findings suggest that HF-ASD children's identification of both realistic and schematic biological motion is unimpaired. PMID:25395988

  10. Motion direction discrimination training reduces perceived motion repulsion.

    PubMed

    Jia, Ke; Li, Sheng

    2017-04-01

    Participants often exaggerate the perceived angular separation between two simultaneously presented motion stimuli, a phenomenon referred to as motion repulsion. The overestimation helps participants differentiate between the two superimposed motion directions, yet it impairs direction perception. Since direction perception can be refined through perceptual training, we investigated whether training on a direction discrimination task changes the amount of motion repulsion. Our results showed a direction-specific learning effect, which was accompanied by a reduced amount of motion repulsion for both the trained and the untrained directions. The reduction in motion repulsion disappeared when participants were trained on a luminance discrimination task (control experiment 1) or a speed discrimination task (control experiment 2), ruling out interpretations in terms of adaptation or training-induced attentional bias. Furthermore, training on a direction discrimination task along a direction 150° away from both directions in the transparent stimulus (control experiment 3) also had little effect on the amount of motion repulsion, ruling out a contribution of task learning. The changed motion repulsion observed in the main experiment was consistent with the prediction of the recurrent model of perceptual learning. Therefore, our findings demonstrate that training in direction discrimination can improve precise direction perception of the transparent stimulus and provide new evidence for the recurrent model of perceptual learning.

  11. Visually guided control of movement in the context of multimodal stimulation

    NASA Technical Reports Server (NTRS)

    Riccio, Gary E.

    1991-01-01

    Flight simulation has been almost exclusively concerned with simulating the motions of the aircraft. Physically distinct subsystems are often combined to simulate the varieties of aircraft motion. Visual display systems simulate the motion of the aircraft relative to remote objects and surfaces (e.g., other aircraft and the terrain). 'Motion platform' simulators recreate aircraft motion relative to the gravitoinertial vector (i.e., correlated rotation and tilt as opposed to the 'coordinated turn' in flight). 'Control loaders' attempt to simulate the resistance of the aerodynamic medium to aircraft motion. However, there are few operational systems that attempt to simulate the motion of the pilot relative to the aircraft and the gravitoinertial vector. The design and use of all simulators is limited by poor understanding of postural control in the aircraft and its effect on the perception and control of flight. Analysis of the perception and control of flight (real or simulated) must consider that: (1) the pilot is not rigidly attached to the aircraft; and (2) the pilot actively monitors and adjusts body orientation and configuration in the aircraft. It is argued that this more complete approach to flight simulation requires that multimodal perception be considered as the rule rather than the exception. Moreover, the necessity of multimodal perception is revealed by emphasizing the complementarity rather than the redundancy among perceptual systems. Finally, an outline is presented for an experiment to be conducted at NASA ARC. The experiment explicitly considers possible consequences of coordination between postural and vehicular control.

  12. Deciding what to see: the role of intention and attention in the perception of apparent motion.

    PubMed

    Kohler, Axel; Haddad, Leila; Singer, Wolf; Muckli, Lars

    2008-03-01

    Apparent motion is an illusory perception of movement that can be induced by alternating presentations of static objects. In his early investigation of the phenomenon [Wertheimer, M. (1912). Experimentelle Studien über das Sehen von Bewegung. Zeitschrift für Psychologie, 61, 161-265], Wertheimer already mentioned that voluntary attention can influence the way in which an ambiguous apparent motion display is perceived. Until now, however, few studies have investigated how strongly attention can modulate apparent motion under different stimulus and task conditions. We used bistable motion quartets of two different sizes, in which the perception of vertical and horizontal motion is equally likely. Eleven observers participated in two experiments. In Experiment 1, participants were instructed to either (a) hold the current movement direction as long as possible, (b) passively view the stimulus, or (c) switch the movement directions as quickly as possible. With the respective instructions, observers could almost double phase durations in (a) and more than halve them in (c) relative to the passive condition. This modulation effect was stronger for the large quartets. In Experiment 2, observers' attention was diverted from the stimulus by a detection task at fixation while they still had to report their conscious perception. This manipulation prolonged dominance durations by up to 100%. The experiments reveal a high susceptibility of ambiguous apparent motion to attentional modulation. We discuss how feature- and space-based attention mechanisms might contribute to these effects.

  13. Altered perceptual sensitivity to kinematic invariants in Parkinson's disease.

    PubMed

    Dayan, Eran; Inzelberg, Rivka; Flash, Tamar

    2012-01-01

    Ample evidence exists for coupling between action and perception in neurologically healthy individuals, yet the precise nature of the internal representations shared between these domains remains unclear. One experimentally derived view is that the invariant properties and constraints characterizing movement generation are also manifested during motion perception. One prominent motor invariant is the "two-thirds power law," describing the strong relation between the kinematics of motion and the geometrical features of the path followed by the hand during planar drawing movements. The two-thirds power law not only characterizes various movement generation tasks but also seems to constrain visual perception of motion. The present study aimed to assess whether motor invariants, such as the two-thirds power law, also constrain motion perception in patients with Parkinson's disease (PD). Patients with PD and age-matched controls were asked to observe the movement of a light spot rotating on an elliptical path and to modify its velocity until it appeared to move most uniformly. As in previous reports, controls tended to choose movements close to obeying the two-thirds power law as most uniform. Patients with PD displayed more variable behavior, choosing, on average, movements closer to, but not at, constant velocity. Our results thus demonstrate impairments in how the two-thirds power law constrains motion perception in patients with PD, in whom the relationship between velocity and curvature appears to be preserved but scaled down. Recent hypotheses on the role of the basal ganglia in motor timing may explain these irregularities. Alternatively, these impairments in perception of movement may reflect similar deficits in motor production.
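
    The velocity-curvature relation at issue can be written as v(t) = γ·κ(t)^(-1/3), equivalently angular velocity A(t) = γ·C(t)^(2/3), hence the name. A minimal illustrative sketch of the speed profile the law predicts on an elliptical path follows; this is not the authors' stimulus code, and the ellipse axes and gain are arbitrary assumptions.

```python
import numpy as np

def two_thirds_power_law_speed(kappa, gain=1.0):
    """Tangential speed predicted by the two-thirds power law:
    v = gain * curvature**(-1/3) (equivalently, angular velocity
    A = gain * curvature**(2/3))."""
    return gain * np.power(kappa, -1.0 / 3.0)

def ellipse_curvature(theta, a=2.0, b=1.0):
    """Curvature of the ellipse x = a*cos(theta), y = b*sin(theta)."""
    return (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
kappa = ellipse_curvature(theta)
speed = two_thirds_power_law_speed(kappa)

# The law predicts slower motion where the path curves more sharply:
# curvature peaks at the ends of the major axis (theta = 0 and pi),
# so the predicted speed is lowest there.
assert speed[0] < speed[90]
```

    A spot animated with this speed profile is the kind of stimulus that observers tend to judge as moving "most uniformly," even though its physical speed varies along the path.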

  14. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

    We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion.

  15. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  16. Eye Movements in Darkness Modulate Self-Motion Perception.

    PubMed

    Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter

    2017-01-01

    During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than with a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% in the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. An oculomotor choice probability analysis showed that the translation with the larger eye-movement excursion was judged to be the larger one more often than expected by chance. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.
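
    A choice probability of the kind used in this analysis is conventionally computed as the area under an ROC curve: the probability that a randomly drawn eye-movement excursion from "judged larger" trials exceeds one from "judged smaller" trials, with 0.5 meaning no relation between the signal and the choice. A minimal sketch under that standard definition (not the authors' analysis code):

```python
import numpy as np

def choice_probability(excursions_larger, excursions_smaller):
    """ROC-based choice probability: P(excursion on 'judged larger'
    trials > excursion on 'judged smaller' trials), counting ties
    as 0.5. A value of 0.5 indicates no choice-related signal."""
    a = np.asarray(excursions_larger, dtype=float)[:, None]
    b = np.asarray(excursions_smaller, dtype=float)[None, :]
    # Pairwise comparison of every 'larger' trial with every
    # 'smaller' trial (equivalent to a Mann-Whitney U / AUC).
    return float(np.mean(a > b) + 0.5 * np.mean(a == b))

# Perfectly separated excursion distributions give 1.0;
# identical distributions give values near 0.5.
cp = choice_probability([3.1, 2.8, 3.5], [1.2, 0.9, 1.4])
```
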

  18. Modification of Motion Perception and Manual Control Following Short-Duration Spaceflight

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Vanya, R. D.; Esteves, J. T.; Rupert, A. H.; Clement, G.

    2011-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination and spatial disorientation following G-transitions. This ESA-NASA study was designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances following short-duration spaceflights. The goals of this study were to (1) examine the effects of stimulus frequency on adaptive changes in motion perception during passive tilt and translation motion, (2) quantify decrements in manual control of tilt motion, and (3) evaluate vibrotactile feedback as a sensorimotor countermeasure.

  19. Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.

    PubMed

    Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno

    2016-11-01

    Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.

  20. Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark

    PubMed Central

    Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.

    2014-01-01

    We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform, and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve days (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvement on the same task was found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness. PMID:23392475

  1. Applications of computer-graphics animation for motion-perception research

    NASA Technical Reports Server (NTRS)

    Proffitt, D. R.; Kaiser, M. K.

    1986-01-01

    The advantages and limitations of using computer-animated stimuli in studying motion perception are presented and discussed. Most current programs of motion perception research could not be pursued without the use of computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, it is assumed that these differences are irrelevant to the questions under study and that findings with computer-generated stimuli will generalize to natural events.

  2. Neurophysiological and Behavioural Correlates of Coherent Motion Perception in Dyslexia

    ERIC Educational Resources Information Center

    Taroyan, Naira A.; Nicolson, Roderick I.; Buckley, David

    2011-01-01

    Coherent motion perception was tested in nine adolescents with dyslexia and 10 control participants matched for age and IQ using low contrast stimuli with three levels of coherence (10%, 25% and 40%). Event-related potentials (ERPs) and behavioural performance data were obtained. No significant between-group differences were found in performance…

  3. Effects of changes in size, speed and distance on the perception of curved 3D trajectories

    PubMed Central

    Zhang, Junjun; Braunstein, Myron L.; Andersen, George J.

    2012-01-01

    Previous research on the perception of 3D object motion has considered time to collision, time to passage, collision detection and judgments of speed and direction of motion, but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment. PMID:23007204
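
    The two optical information sources manipulated in these experiments can be illustrated with a simple sketch: under perspective viewing, an object's angular size grows as it approaches, and the same physical speed projects to a larger angular speed at nearer distances. The function names, object size, and distances below are illustrative assumptions, not the authors' stimulus code.

```python
import numpy as np

def angular_size(physical_size, distance):
    """Visual angle (radians) subtended by an object at a given distance."""
    return 2.0 * np.arctan(physical_size / (2.0 * distance))

def angular_speed(physical_speed, distance):
    """Approximate angular speed (rad/s) of an object moving
    perpendicular to the line of sight at a given distance."""
    return physical_speed / distance

# An object whose curved trajectory brings it closer subtends a
# growing visual angle even though its physical size is constant...
near = angular_size(0.2, 2.0)   # 0.2 m object at 2 m
far = angular_size(0.2, 8.0)    # same object at 8 m
assert near > far

# ...and a constant physical speed projects to a faster angular
# speed when the object is nearer, so both cues covary with the
# shape of the 3D motion path.
assert angular_speed(1.0, 2.0) > angular_speed(1.0, 8.0)
```
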

  4. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    PubMed

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  5. Motion Perception and Driving: Predicting Performance Through Testing and Shortening Braking Reaction Times Through Training

    DTIC Science & Technology

    2013-12-01

    brake reaction time on the EB test from pre-post while there was no significant change for the control group: t(38)=2.24, p=0.03. Tests of 3D motion...0.61). In Experiment 2, the motion perception training group had a significant decrease in brake reaction time on the EB test from pre- to...the following. The experiment was divided into 8 phases: a pretest, six training blocks (once per week), and a posttest. Participants were allocated

  6. Global motion perception deficits in autism are reflected as early as primary visual cortex.

    PubMed

    Robertson, Caroline E; Thomas, Cibu; Kravitz, Dwight J; Wallace, Gregory L; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I

    2014-09-01

    Individuals with autism are often characterized as 'seeing the trees, but not the forest'-attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15-27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). 
These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry.
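
    The coherence manipulation described above is conventionally implemented by assigning each dot the signal direction with probability equal to the coherence level and a random direction otherwise. A minimal sketch of one frame's direction assignment (not the authors' stimulus code; the dot count and seed are arbitrary assumptions):

```python
import numpy as np

def rdk_directions(n_dots, coherence, signal_dir, rng):
    """Per-dot motion directions for one frame of a random-dot
    kinematogram: a `coherence` fraction of dots moves in
    `signal_dir` (radians); the rest move in uniformly random
    directions."""
    is_signal = rng.random(n_dots) < coherence
    directions = rng.uniform(0.0, 2.0 * np.pi, n_dots)
    directions[is_signal] = signal_dir
    return directions

rng = np.random.default_rng(0)
dirs = rdk_directions(n_dots=10_000, coherence=0.30, signal_dir=0.0, rng=rng)

# At 30% coherence, roughly 30% of dots carry the signal direction;
# the observer's task is to integrate these into a global percept.
signal_fraction = np.mean(dirs == 0.0)
```
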

  7. Do rhesus monkeys (Macaca mulatta) perceive illusory motion?

    PubMed

    Agrillo, Christian; Gori, Simone; Beran, Michael J

    2015-07-01

    During the last decade, visual illusions have been used repeatedly to understand similarities and differences in visual perception between human and non-human animals. However, nearly all studies have focused on illusions not related to motion perception, and to date it is unknown whether non-human primates perceive any kind of motion illusion. In the present study, we investigated whether rhesus monkeys (Macaca mulatta) perceive one of the most popular motion illusions in humans, the Rotating Snake illusion (RSI). To this end, we set up four experiments. In Experiment 1, subjects were initially trained to discriminate static versus dynamic arrays. Once they reached the learning criterion, they underwent probe trials in which we presented the RSI and a control stimulus identical in overall configuration except that the order of the luminance sequence was changed such that humans perceive no apparent motion. The overall performance of the monkeys indicated that they spontaneously classified the RSI as a dynamic array. Subsequently, we tested adult humans on the same task with the aim of directly comparing the performance of human and non-human primates (Experiment 2). In Experiment 3, we found that monkeys can be successfully trained to discriminate between the RSI and a control stimulus. Experiment 4 showed that a simple change in luminance sequence in the two arrays could not explain the performance reported in Experiment 3. These results suggest that some rhesus monkeys display human-like perception of this motion illusion, raising the possibility that the neurocognitive systems underlying motion perception are similar in human and non-human primates.

  8. Moving from spatially segregated to transparent motion: a modelling approach

    PubMed Central

    Durant, Szonya; Donoso-Barrera, Alejandra; Tan, Sovira; Johnston, Alan

    2005-01-01

    Motion transparency, in which patterns of moving elements group together to give the impression of lacy overlapping surfaces, provides an important challenge to models of motion perception. It has been suggested that we perceive transparent motion when the shape of the velocity histogram of the stimulus is bimodal. To investigate this further, random-dot kinematogram motion sequences were created to simulate segregated (perceptually spatially separated) and transparent (perceptually overlapping) motion. The motion sequences were analysed using the multi-channel gradient model (McGM) to obtain the speed and direction at every pixel of each frame of the motion sequences. The velocity histograms obtained were found to be quantitatively similar and all were bimodal. However, the spatial and temporal properties of the velocity field differed between segregated and transparent stimuli. Transparent stimuli produced patches of rightward and leftward motion that varied in location over time. This demonstrates that we can successfully differentiate between these two types of motion on the basis of the time varying local velocity field. However, the percept of motion transparency cannot be based simply on the presence of a bimodal velocity histogram. PMID:17148338
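
    The histogram analysis discussed above can be sketched by pooling velocity estimates from two superimposed motion components and binning them. As the abstract notes, such a histogram comes out bimodal for both segregated and transparent stimuli, so this sketch illustrates only the histogram computation, not the percept; the dot counts, velocities, and noise level are arbitrary stand-ins for model velocity estimates, not the McGM itself.

```python
import numpy as np

def velocity_histogram(vx, bins=41, vrange=(-4.0, 4.0)):
    """Histogram of horizontal velocity estimates, as used to probe
    whether a bimodal distribution accompanies perceived transparency."""
    counts, edges = np.histogram(vx, bins=bins, range=vrange)
    return counts, edges

rng = np.random.default_rng(1)
# Two superimposed dot populations, leftward (-2) and rightward (+2),
# each with estimation noise.
vx = np.concatenate([rng.normal(-2.0, 0.4, 5000),
                     rng.normal(+2.0, 0.4, 5000)])
counts, edges = velocity_histogram(vx)

# Bimodality: the central bin (around v = 0) is a trough between
# the two peaks near -2 and +2.
mid = len(counts) // 2
assert counts[mid] < counts.max()
```
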

  9. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in the distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed.

  10. Recovery of biological motion perception and network plasticity after cerebellar tumor removal.

    PubMed

    Sokolov, Arseny A; Erb, Michael; Grodd, Wolfgang; Tatagiba, Marcos S; Frackowiak, Richard S J; Pavlova, Marina A

    2014-10-01

    Visual perception of body motion is vital for everyday activities such as social interaction, motor learning, and car driving. Tumors in the left lateral cerebellum impair visual perception of body motion. However, the compensatory potential after cerebellar damage and the underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL, who had a dysplastic gangliocytoma (Lhermitte-Duclos disease) of the left cerebellum, before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (fMRI). Alterations in the underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. fMRI in patient SL revealed a similar pattern of cerebellar activation during biological motion processing as in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in healthy participants, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, though one located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and of the neural circuits underpinning visual social cognition.

  11. The Relative Importance of Spatial Versus Temporal Structure in the Perception of Biological Motion: An Event-Related Potential Study

    ERIC Educational Resources Information Center

    Hirai, Masahiro; Hiraki, Kazuo

    2006-01-01

    We investigated how the spatiotemporal structure of animations of biological motion (BM) affects brain activity. We measured event-related potentials (ERPs) during the perception of BM under four conditions: normal spatial and temporal structure; scrambled spatial and normal temporal structure; normal spatial and scrambled temporal structure; and…

  12. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  13. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  14. Motion interactive video games in home training for children with cerebral palsy: parents' perceptions.

    PubMed

    Sandlund, Marlene; Dock, Katarina; Häger, Charlotte K; Waterworth, Eva Lindh

    2012-01-01

    To explore parents' perceptions of using low-cost motion interactive video games as home training for their children with mild/moderate cerebral palsy. Semi-structured interviews were carried out with parents from 15 families after participation in an intervention where motion interactive games were used daily in home training for their child. A qualitative content analysis approach was applied. The parents' perception of the training was very positive. They expressed the view that motion interactive video games may promote positive experiences of physical training in rehabilitation, where the social aspects of gaming were especially valued. Further, the parents experienced less need to coach, as gaming stimulated independent training. However, there was a desire for more controlled and individualized games to better challenge the specific rehabilitative needs of each child. Low-cost motion interactive games may add motivation and social interaction to home training and promote independent training with reduced coaching effort for the parents. In future designs of interactive games for rehabilitation purposes, it is important to preserve the motivational and social features of games while optimizing the individualized physical exercise.

  15. Central Inhibition Ability Modulates Attention-Induced Motion Blindness

    ERIC Educational Resources Information Center

    Milders, Maarten; Hay, Julia; Sahraie, Arash; Niedeggen, Michael

    2004-01-01

    Impaired motion perception can be induced in normal observers in a rapid serial visual presentation task. Essential for this effect is the presence of motion distractors prior to the motion target, and we proposed that this attention-induced motion blindness results from high-level inhibition produced by the distractors. To investigate this, we…

  16. Self Motion Perception and Motion Sickness

    NASA Technical Reports Server (NTRS)

    Fox, Robert A. (Principal Investigator)

    1991-01-01

    The studies conducted in this research project examined several aspects of motion sickness in animal models. A principal objective was to investigate the neuroanatomy important in motion sickness, both by examining the utility of putative models and by defining the neural mechanisms involved.

  17. Age-related changes in perception of movement in driving scenes.

    PubMed

    Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M

    2014-07-01

    Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. 
The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  18. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  19. Visual Depth from Motion Parallax and Eye Pursuit

    PubMed Central

    Stroyan, Keith; Nawrot, Mark

    2012-01-01

    A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment, however retinal motion alone cannot mathematically determine relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p.1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit mathematically determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case when objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation to analyze the dynamic geometry of future experiments. PMID:21695531
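
The motion/pursuit relationship described above can be sketched numerically. This is a first-order geometric sketch, assuming lateral observer translation and small angles; the variable names are illustrative, not taken from the paper. For a target at distance f + d with fixation at distance f, the ratio of retinal slip to pursuit rate works out to d / (f + d), which can be inverted to recover depth relative to fixation:

```python
def retinal_and_pursuit_rates(v, f, d):
    # Lateral observer translation at speed v, fixation at distance f,
    # unfixated target at distance f + d (small-angle, instantaneous rates).
    pursuit = v / f                  # eye rotation that holds fixation
    target_rate = v / (f + d)       # angular rate of the unfixated target
    retinal = pursuit - target_rate # residual retinal slip of the target
    return retinal, pursuit

def depth_from_motion_pursuit(retinal, pursuit, f):
    # Motion/pursuit ratio r = d / (f + d), hence d = f * r / (1 - r).
    r = retinal / pursuit
    return f * r / (1.0 - r)

retinal, pursuit = retinal_and_pursuit_rates(v=0.15, f=0.5, d=0.1)
d_hat = depth_from_motion_pursuit(retinal, pursuit, f=0.5)  # recovers d = 0.1
```

Note that the retinal slip alone (0.05 rad/s here) says nothing about d without the pursuit signal, which is the point of the motion/pursuit cue.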

  20. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. Recently, deep neural networks based on predictive coding were reported in the form of a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
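
The error-driven update at the heart of predictive coding can be reduced to a few lines. The loop below is a deliberately minimal sketch of the learning rule the abstract describes, not PredNet itself (which stacks convolutional recurrent layers); the one-parameter "internal model" and learning rate are illustrative assumptions:

```python
# Toy predictive-coding loop: a one-parameter internal model predicts the
# next sensory sample as w * x, and the prediction error refines w.
w = 0.0                      # internal model, initially uninformed
for _ in range(200):
    x, target = 1.0, 2.0     # current input and the actual next input
    error = target - w * x   # mismatch between prediction and sensation
    w += 0.05 * error * x    # error-driven refinement of the internal model
# w converges toward 2.0, the true input-to-next-input gain
```

Once the error is near zero the model's predictions match the input, which is the regime in which PredNet's learned predictions are probed with unlearned (and illusory) videos.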

  1. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. Recently, deep neural networks based on predictive coding were reported in the form of a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  2. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2:305–314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional supporting information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.

  3. Rocking or Rolling – Perception of Ambiguous Motion after Returning from Space

    PubMed Central

    Clément, Gilles; Wood, Scott J.

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz) where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1–2 days. During dynamic linear acceleration (0.15–0.6 Hz, ±1.7 m/s2) perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore–aft translation. These results suggest that there was a shift in the frequency dynamic of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions. PMID:25354042

  4. Rocking or rolling--perception of ambiguous motion after returning from space.

    PubMed

    Clément, Gilles; Wood, Scott J

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz) where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1-2 days. During dynamic linear acceleration (0.15-0.6 Hz, ±1.7 m/s2) perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore-aft translation. These results suggest that there was a shift in the frequency dynamic of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions.

  5. Self-motion perception compresses time experienced in return travel.

    PubMed

    Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2011-01-01

    It is often anecdotally reported that time experienced in return travel (back to the start point) seems shorter than time spent in outward travel (travel to a new destination). Here, we report the first experimental results showing that return travel time is experienced as shorter than the actual time. This discrepancy is induced by the existence of self-motion perception.

  6. An Assessment of the Impact of a Science Outreach Program, Science In Motion, on Student Achievement, Teacher Efficacy, and Teacher Perception

    ERIC Educational Resources Information Center

    Herring, Phillip Allen

    2009-01-01

    The purpose of the study was to analyze the science outreach program, Science In Motion (SIM), located in Mobile, Alabama. This research investigated what impact the SIM program has on student cognitive functioning and teacher efficacy and also investigated teacher perceptions and attitudes regarding the program. To investigate student…

  7. He Throws like a Girl (but Only when He's Sad): Emotion Affects Sex-Decoding of Biological Motion Displays

    ERIC Educational Resources Information Center

    Johnson, Kerri L.; McKay, Lawrie S.; Pollick, Frank E.

    2011-01-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming…

  8. Perception of Biological Motion in Schizophrenia and Healthy Individuals: A Behavioral and fMRI Study

    PubMed Central

    Kim, Jejoong; Park, Sohee; Blake, Randolph

    2011-01-01

    Background Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits. Methodology/Findings In Experiment 1, we measured ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia patients was not selective to biological or scrambled motion. Conclusion Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. 
The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes. PMID:21625492

  9. Psilocybin impairs high-level but not low-level motion perception.

    PubMed

    Carter, Olivia L; Pettigrew, John D; Burr, David C; Alais, David; Hasler, Felix; Vollenweider, Franz X

    2004-08-26

    The hallucinogenic serotonin 1A/2A agonist psilocybin is known for its ability to induce illusions of motion in otherwise stationary objects or textured surfaces. This study investigated the effect of psilocybin on local and global motion processing in nine human volunteers. Using a forced-choice direction-of-motion discrimination task we show that psilocybin selectively impairs coherence sensitivity for random dot patterns, likely mediated by high-level global motion detectors, but not contrast sensitivity for drifting gratings, believed to be mediated by low-level detectors. These results are in line with those observed within schizophrenic populations and are discussed in respect to the proposition that psilocybin may provide a model to investigate clinical psychosis and the pharmacological underpinnings of visual perception in normal populations.

  10. 3D surface perception from motion involves a temporal–parietal network

    PubMed Central

    Beer, Anton L.; Watanabe, Takeo; Ni, Rui; Sasaki, Yuka; Andersen, George J.

    2010-01-01

    Previous research has suggested that three-dimensional (3D) structure-from-motion (SFM) perception in humans involves several motion-sensitive occipital and parietal brain areas. By contrast, SFM perception in nonhuman primates seems to involve the temporal lobe including areas MT, MST and FST. The present functional magnetic resonance imaging study compared several motion-sensitive regions of interest including the superior temporal sulcus (STS) while human observers viewed horizontally moving dots that defined either a 3D corrugated surface or a 3D random volume. Low-level stimulus features such as dot density and velocity vectors as well as attention were tightly controlled. Consistent with previous research we found that 3D corrugated surfaces elicited stronger responses than random motion in occipital and parietal brain areas including area V3A, the ventral and dorsal intraparietal sulcus, the lateral occipital sulcus and the fusiform gyrus. Additionally, 3D corrugated surfaces elicited stronger activity in area MT and the STS but not in area MST. Brain activity in the STS but not in area MT correlated with interindividual differences in 3D surface perception. Our findings suggest that area MT is involved in the analysis of optic flow patterns such as speed gradients and that the STS in humans plays a greater role in the analysis of 3D SFM than previously thought. PMID:19674088

  11. Video quality assessment using a statistical model of human visual speed perception.

    PubMed

    Wang, Zhou; Li, Qiang

    2007-12-01

    Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the Video Quality Experts Group (VQEG) Phase I test data set.
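
One plausible reading of the weighting scheme is a pooling rule in which each local distortion score is weighted by motion information content divided by perceptual uncertainty, so that informative, reliably perceived regions dominate the quality score. The function below is an illustrative sketch under that assumption, not the authors' implementation; the inputs are hypothetical per-region scores:

```python
def weighted_quality(distortions, info, uncertainty, eps=1e-6):
    # Illustrative pooling rule: each local distortion score is weighted
    # by information content / perceptual uncertainty, so the weight grows
    # with information and shrinks with uncertainty (eps avoids div-by-zero).
    weights = [i / (u + eps) for i, u in zip(info, uncertainty)]
    return sum(w * d for w, d in zip(weights, distortions)) / sum(weights)

# Equal information and uncertainty: plain average of the distortions.
even = weighted_quality([1.0, 2.0], [1.0, 1.0], [1.0, 1.0])
# More information at the first site pulls the pooled score toward it.
tilted = weighted_quality([1.0, 2.0], [2.0, 1.0], [1.0, 1.0])
```

The same normalization keeps the pooled score within the range of the local scores, which is convenient when mapping it onto an existing quality metric.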

  12. Kinesthetic information disambiguates visual motion signals.

    PubMed

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. The Verriest Lecture: Color lessons from space, time, and motion

    PubMed Central

    Shevell, Steven K.

    2012-01-01

    The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398

  14. Spared Ability to Perceive Direction of Locomotor Heading and Scene-Relative Object Movement Despite Inability to Perceive Relative Motion

    PubMed Central

    Vaina, Lucia M.; Buonanno, Ferdinando; Rushton, Simon K.

    2014-01-01

    Background All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. Therefore it would be expected that an impairment of perception of relative motion should impact the ability to judge heading and other 3D motion tasks. Material/Methods We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. Therefore we ran further experiments in which we isolated optic flow and scale change. Results Patients’ performance was in normal ranges on both tests. The finding that the ability to perceive heading can be retained despite an impairment of the ability to judge relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS and SR’s performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that in spite of severe deficits in relative motion in the frontoparallel (x-y) plane, information from self-motion helped identification of objects moving along an intercepting 3D relative-motion trajectory. Conclusions This result suggests a potential use of a flow-parsing strategy to detect, in a 3D world, the trajectory of moving objects when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation. PMID:25183375

  15. Motion and Actions in Language: Semantic Representations in Occipito-Temporal Cortex

    ERIC Educational Resources Information Center

    Humphreys, Gina F.; Newling, Katherine; Jennings, Caroline; Gennari, Silvia P.

    2013-01-01

    Understanding verbs typically activates posterior temporal regions and, in some circumstances, motion perception area V5. However, the nature and role of this activation remains unclear: does language alone indeed activate V5? And are posterior temporal representations modality-specific motion representations, or supra-modal motion-independent…

  16. The neural encoding of self-generated and externally applied movement: implications for the perception of self-motion and spatial memory

    PubMed Central

    Cullen, Kathleen E.

    2014-01-01

    The vestibular system is vital for maintaining an accurate representation of self-motion. As one moves (or is moved) toward a new place in the environment, signals from the vestibular sensors are relayed to higher-order centers. It is generally assumed the vestibular system provides a veridical representation of head motion to these centers for the perception of self-motion and spatial memory. In support of this idea, evidence from lesion studies suggests that vestibular inputs are required for the directional tuning of head direction cells in the limbic system as well as neurons in areas of multimodal association cortex. However, recent investigations in monkeys and mice challenge the notion that early vestibular pathways encode an absolute representation of head motion. Instead, processing at the first central stage is inherently multimodal. This minireview highlights recent progress that has been made towards understanding how the brain processes and interprets self-motion signals encoded by the vestibular otoliths and semicircular canals during everyday life. The following interrelated questions are considered. What information is available to the higher-order centers that contribute to self-motion perception? How do we distinguish between our own self-generated movements and those of the external world? And lastly, what are the implications of differences in the processing of these active vs. passive movements for spatial memory? PMID:24454282

  17. Multimodal Perception and Multicriterion Control of Nested Systems. 1; Coordination of Postural Control and Vehicular Control

    NASA Technical Reports Server (NTRS)

    Riccio, Gary E.; McDonald, P. Vernon

    1998-01-01

    The purpose of this report is to identify the essential characteristics of goal-directed whole-body motion. The report is organized into three major sections (Sections 2, 3, and 4). Section 2 reviews general themes from ecological psychology and control-systems engineering that are relevant to the perception and control of whole-body motion. These themes provide an organizational framework for analyzing the complex and interrelated phenomena that are the defining characteristics of whole-body motion. Section 3 of this report applies the organizational framework from the first section to the problem of perception and control of aircraft motion. This is a familiar problem in control-systems engineering and ecological psychology. Section 4 examines an essential but generally neglected aspect of vehicular control: coordination of postural control and vehicular control. To facilitate presentation of this new idea, postural control and its coordination with vehicular control are analyzed in terms of conceptual categories that are familiar in the analysis of vehicular control.

  18. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  19. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1992-01-01

    The overall objective of this proposal is to understand the relationship between human orientation control and motion sickness susceptibility. Three areas related to orientation control will be investigated: (1) reflexes associated with the control of eye movements and posture, (2) the perception of body rotation and position with respect to gravity, and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues that are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. We refer to this process as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine if this ability confers some immunity to the development of motion sickness symptoms. Measurements of reflexes, motion perception, sensory selection abilities, and motion sickness susceptibility will concentrate on pitch and roll motions, since these seem most relevant to the space motion sickness problem. Vestibulo-ocular (VOR) and oculomotor reflexes will be measured using a unique two-axis rotation device developed in our laboratory over the last seven years. Posture control reflexes will be measured using a movable posture platform capable of independently altering proprioceptive and visual orientation cues. Motion perception will be quantified using a closed-loop feedback technique developed by Zacharias and Young (Exp Brain Res, 1981). This technique requires a subject to null out motions induced by the experimenter while being exposed to various confounding sensory orientation cues. A subject's sensory selection abilities will be measured by the magnitude and timing of his reactions to changes in sensory environments. Motion sickness susceptibility will be measured by the time required to induce characteristic changes in the pattern of electrogastrogram recordings while exposed to various sensory environments during posture and motion perception tests. The results of this work are relevant to NASA's interest in understanding the etiology of space motion sickness. If any of the reflex, perceptual, or sensory selection abilities of subjects are found to correlate with motion sickness susceptibility, this work may be an important step in suggesting a method of predicting motion sickness susceptibility. If sensory selection can provide a means to avoid sensory conflict, then further work may lead to training programs which could enhance a subject's sensory selection ability and therefore minimize motion sickness susceptibility.

  20. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways, or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919

  2. Motion coherence and direction discrimination in healthy aging.

    PubMed

    Pilz, Karin S; Miller, Louisa; Agnew, Hannah C

    2017-01-01

    Perceptual functions change with age, particularly motion perception. With regard to healthy aging, previous studies mostly measured motion coherence thresholds for coarse motion direction discrimination along cardinal axes of motion. Here, we investigated age-related changes in the ability to discriminate between small angular differences in motion directions, which allows for a more specific assessment of age-related decline and its underlying mechanisms. We first assessed older (>60 years) and younger (<30 years) participants' ability to discriminate coarse horizontal (left/right) and vertical (up/down) motion at 100% coherence and a stimulus duration of 400 ms. In a second step, we determined participants' motion coherence thresholds for vertical and horizontal coarse motion direction discrimination. In a third step, we used the individually determined motion coherence thresholds and tested fine motion direction discrimination for motion clockwise away from horizontal and vertical motion. Older adults performed as well as younger adults for discriminating motion away from vertical. Surprisingly, performance for discriminating motion away from horizontal was strongly decreased. Further analyses, however, showed a relationship between motion coherence thresholds for horizontal coarse motion direction discrimination and fine motion direction discrimination performance in older adults. In a control experiment, using motion coherence above threshold for all conditions, the difference in performance for horizontal and vertical fine motion direction discrimination for older adults disappeared. These results clearly contradict the notion of an overall age-related decline in motion perception, and, most importantly, highlight the importance of taking into account individual differences when assessing age-related changes in perceptual functions.
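Motion coherence thresholds of the kind measured in this record are typically estimated with an adaptive staircase. The sketch below is generic (the paper's actual procedure, rule, and parameters are not given here; all names and values are hypothetical): a 2-down/1-up staircase run on a simulated observer whose accuracy rises sigmoidally with coherence, which converges near the ~70.7%-correct coherence level.

```python
import math
import random

def simulate_staircase(threshold=0.30, slope=0.08, trials=400, seed=1):
    """2-down/1-up staircase estimate of a motion-coherence threshold.

    A hypothetical observer's probability of a correct direction judgement
    rises sigmoidally from chance (0.5) toward 1.0 with coherence; the
    2-down/1-up rule converges near the ~70.7%-correct coherence level.
    Returns the mean coherence over the last few staircase reversals.
    """
    rng = random.Random(seed)
    coherence, step = 1.0, 0.02
    correct_run, last_dir, reversals = 0, 0, []
    for _ in range(trials):
        p_correct = 0.5 + 0.5 / (1.0 + math.exp(-(coherence - threshold) / slope))
        if rng.random() < p_correct:
            correct_run += 1
            if correct_run == 2:               # two correct in a row: harder
                correct_run = 0
                coherence = max(0.0, coherence - step)
                if last_dir == +1:             # direction change: a reversal
                    reversals.append(coherence)
                last_dir = -1
        else:                                  # one error: easier
            correct_run = 0
            coherence = min(1.0, coherence + step)
            if last_dir == -1:
                reversals.append(coherence)
            last_dir = +1
    tail = reversals[-8:]
    return sum(tail) / len(tail)

print(round(simulate_staircase(), 3))   # settles near the 70.7%-correct level
```

Averaging only the final reversals discards the descent from the easy starting level, so the estimate reflects the converged oscillation around threshold.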

  3. Gravito-Inertial Force Resolution in Perception of Synchronized Tilt and Translation

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Holly, Jan; Zhang, Guen-Lu

    2011-01-01

    Natural movements in the sagittal plane involve pitch tilt relative to gravity combined with translation motion. The Gravito-Inertial Force (GIF) resolution hypothesis states that the resultant force on the body is perceptually resolved into tilt and translation consistently with the laws of physics. The purpose of this study was to test this hypothesis for human perception during combined tilt and translation motion. EXPERIMENTAL METHODS: Twelve subjects provided verbal reports during 0.3 Hz motion in the dark with 4 types of tilt and/or translation motion: 1) pitch tilt about an interaural axis at +/-10deg or +/-20deg, 2) fore-aft translation with acceleration equivalent to +/-10deg or +/-20deg, 3) combined "in phase" tilt and translation motion resulting in acceleration equivalent to +/-20deg, and 4) "out of phase" tilt and translation motion that maintained the resultant gravito-inertial force aligned with the longitudinal body axis. The amplitudes of perceived pitch tilt and translation at the head were obtained during separate trials. MODELING METHODS: Three-dimensional mathematical modeling was performed to test the GIF-resolution hypothesis using a dynamical model. The model encoded GIF-resolution using the standard vector equation, and used an internal model of motion parameters, including gravity. Differential equations conveyed time-varying predictions. The six motion profiles were tested, resulting in predicted perceived amplitude of tilt and translation for each. RESULTS: The modeling results exhibited the same pattern as the experimental results. Most importantly, both modeling and experimental results showed greater perceived tilt during the "in phase" profile than the "out of phase" profile, and greater perceived tilt during combined "in phase" motion than during pure tilt of the same amplitude. However, the model did not predict as much perceived translation as reported by subjects during pure tilt. CONCLUSION: Human perception is consistent with the GIF-resolution hypothesis even when the gravito-inertial force vector remains aligned with the body during periodic motion. Perception is also consistent with GIF-resolution in the opposite condition, when the gravito-inertial force vector angle is enhanced by synchronized tilt and translation.
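The "standard vector equation" mentioned in this record is, in its usual textbook form (stated here for orientation, not quoted from the study), the relation between the gravito-inertial force sensed by the otolith organs, gravity, and linear acceleration:

```latex
% Gravito-inertial force per unit mass at the otoliths:
\vec{f} \;=\; \vec{g} \;-\; \vec{a}
% GIF resolution: the brain must decompose the single measured \vec{f}
% into an estimated gravity direction (tilt) and an estimated linear
% acceleration (translation). The decomposition is ambiguous from
% \vec{f} alone: a static pitch tilt of angle \theta and a fore-aft
% acceleration of magnitude g\tan\theta produce the same shear
% component of \vec{f}.
```

In the "out of phase" profile above, tilt and translation are synchronized precisely so that the resultant force stays aligned with the longitudinal body axis, which is why that condition probes the hypothesis so directly.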

  4. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Does language guide event perception? Evidence from eye movements

    PubMed Central

    Papafragou, Anna; Hulbert, Justin; Trueswell, John

    2008-01-01

    Languages differ in how they encode motion. When describing bounded motion, English speakers typically use verbs that convey information about manner (e.g., slide, skip, walk) rather than path (e.g., approach, ascend), whereas Greek speakers do the opposite. We investigated whether this strong cross-language difference influences how people allocate attention during motion perception. We compared eye movements from Greek and English speakers as they viewed motion events while (a) preparing verbal descriptions, or (b) memorizing the events. During the verbal description task, speakers’ eyes rapidly focused on the event components typically encoded in their native language, generating significant cross-language differences even during the first second of motion onset. However, when freely inspecting ongoing events, as in the memorization task, people allocated attention similarly regardless of the language they speak. Differences between language groups arose only after the motion stopped, such that participants spontaneously studied those aspects of the scene that their language does not routinely encode in verbs. These findings offer a novel perspective on the relation between language and perceptual/cognitive processes. They indicate that attention allocation during event perception is not affected by the perceiver’s native language; effects of language arise only when linguistic forms are recruited to achieve the task, such as when committing facts to memory. PMID:18395705

  6. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
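The core aggregation idea in this record (formulating a motion sequence as transitions between static poses and merging sequences into a tree) can be sketched with a prefix tree over pose labels. The following Python sketch is a simplified illustration of that idea, not the MotionFlow implementation; pose labels and structure are invented.

```python
def build_pattern_tree(sequences):
    """Aggregate discretized pose sequences into a prefix tree with counts.

    Each sequence is a list of pose-cluster labels; sequences sharing a
    prefix merge into the same branch, so node counts summarize how many
    recordings follow each motion pattern and where patterns diverge.
    """
    tree = {"count": 0, "children": {}}
    for seq in sequences:
        node = tree
        node["count"] += 1
        for pose in seq:
            node = node["children"].setdefault(pose, {"count": 0, "children": {}})
            node["count"] += 1
    return tree

seqs = [["stand", "reach", "grasp"],
        ["stand", "reach", "wave"],
        ["stand", "crouch"]]
tree = build_pattern_tree(seqs)
print(tree["children"]["stand"]["count"])             # 3 sequences start here
print(sorted(tree["children"]["stand"]["children"]))  # ['crouch', 'reach']
```

The branch counts are exactly the quantity a flow visualization would map to ribbon width: thick ribbons for common pattern prefixes, thin ones where sequences diverge.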

  7. How long did it last? You would better ask a human

    PubMed Central

    Lacquaniti, Francesco; Carrozzo, Mauro; d’Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka

    2014-01-01

    In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions. PMID:24478694

  9. Effects of prolonged weightlessness on self-motion perception and eye movements evoked by roll and pitch

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.; Parker, Donald E.

    1987-01-01

    Seven astronauts reported translational self-motion during roll simulation 1-3 h after landing following 5-7 d of orbital flight. Two reported strong translational self-motion perception when they performed pitch head motions during entry and while the orbiter was stationary on the runway. One of two astronauts from whom adequate data were collected exhibited a 132-deg shift in the phase angle between roll stimulation and horizontal eye position 2 h after landing. Neither of two from whom adequate data were collected exhibited increased horizontal eye movement amplitude or disturbance of voluntary pitch or roll body motion immediately postflight. These results are generally consistent with an otolith tilt-translation reinterpretation model and are being applied to the development of apparatus and procedures intended to preadapt astronauts to the sensory rearrangement of weightlessness.

  10. Auditorily-induced illusory self-motion: a review.

    PubMed

    Väljamäe, Aleksander

    2009-10-01

    The aim of this paper is to provide a first review of studies related to auditorily-induced self-motion (vection). These studies have been scarce and scattered over the years and across several research communities, including clinical audiology, multisensory perception of self-motion and its neural correlates, ergonomics, and virtual reality. The reviewed studies provide evidence that auditorily-induced vection has behavioral, physiological, and neural correlates. Although the contribution of sound to self-motion perception appears to be weaker than that of the visual modality, specific acoustic cues appear to be instrumental for a number of domains, including posture prosthesis, navigation in unusual gravitoinertial environments (in the air, in space, or underwater), non-visual navigation, and multisensory integration during self-motion. A number of open research questions are highlighted, opening avenues for more active and systematic studies in this area.

  11. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
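The size-invariance test in this record hinges on a representation that ignores display size while preserving body posture. A minimal way to get such a representation is to center each point-light frame and divide by its RMS radius. The sketch below is an illustration of size-invariant form matching under that assumption, not the posturo-temporal filter model itself.

```python
import numpy as np

def normalize_posture(points):
    """Scale- and translation-invariant code for one point-light frame.

    Centering removes screen position; dividing by the RMS radius removes
    display size, so two frames showing the same body posture at different
    sizes map onto the same representation.
    """
    p = points - points.mean(axis=0)
    return p / np.sqrt((p ** 2).sum(axis=1).mean())

# A toy 4-dot "posture" shown small and large yields identical codes.
frame = np.array([[0.0, 1.7], [0.2, 1.4], [-0.2, 1.4], [0.0, 0.9]])
print(np.allclose(normalize_posture(frame * 0.5),
                  normalize_posture(frame * 3.0)))   # True
```

A sequence of such normalized frames stays continuous across the stimulus's rapid size changes even though the raw retinal coordinates do not, which is the property the size-changing stimulus exploits.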

  12. Suppressive mechanisms in visual motion processing: from perception to intelligence

    PubMed Central

    Tadin, Duje

    2015-01-01

    Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and those with schizophrenia—a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. PMID:26299386

  13. Self-Motion Perception: Assessment by Real-Time Computer Generated Animations

    NASA Technical Reports Server (NTRS)

    Parker, Donald E.

    1999-01-01

    Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are reports reliable? Second, do reports confirm expectations based on stimuli to vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?

  14. Modeling depth from motion parallax with the motion/pursuit ratio

    PubMed Central

    Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith

    2014-01-01

    The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed. PMID:25339926
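    The geometry behind the motion/pursuit ratio can be illustrated numerically. Assuming the basic form of the law described above (relative depth d/f approximated by the ratio of retinal image motion to pursuit rate), a minimal sketch; the function name and example values are hypothetical, not taken from the paper:

    ```python
    def motion_pursuit_depth(retinal_motion_dps, pursuit_dps, viewing_distance):
        """Estimate depth relative to fixation from the motion/pursuit ratio.

        Assumes the basic motion/pursuit law: the depth d of a point relative
        to the fixation plane satisfies d / f ~= dtheta / dalpha, where dtheta
        is the retinal image motion rate, dalpha is the pursuit eye movement
        rate (both in deg/s), and f is the viewing distance.
        """
        ratio = retinal_motion_dps / pursuit_dps
        return ratio * viewing_distance

    # A point with 0.5 deg/s of retinal slip during a 5 deg/s pursuit, viewed
    # at 60 cm, is estimated to lie about 6 cm from the fixation plane:
    depth = motion_pursuit_depth(0.5, 5.0, 60.0)
    ```

    The empirical version proposed in the paper rescales this geometric prediction to match the foreshortened depth magnitudes observers actually report.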

  15. Slow and fast visual motion channels have independent binocular-rivalry stages.

    PubMed Central

    van de Grind, W. A.; van Hof, P.; van der Smagt, M. J.; Verstraten, F. A.

    2001-01-01

    We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness. PMID:11270442

  16. Discriminating Rigid from Nonrigid Motion

    DTIC Science & Technology

    1989-07-31

    motion can be given a three-dimensional interpretation using a constraint of rigidity. Kruppa's result and others (Faugeras & Maybank, 1989; Huang...Experimental Psychology: Human Perception and Performance, 10, 1-11. Faugeras, O., & Maybank, S. (1989). Motion from point matches: multiplicity of

  17. Congruity Effects in Time and Space: Behavioral and ERP Measures

    ERIC Educational Resources Information Center

    Teuscher, Ursina; McQuire, Marguerite; Collins, Jennifer; Coulson, Seana

    2008-01-01

    Two experiments investigated whether motion metaphors for time affected the perception of spatial motion. Participants read sentences either about literal motion through space or metaphorical motion through time written from either the ego-moving or object-moving perspective. Each sentence was followed by a cartoon clip. Smiley-moving clips showed…

  18. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
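    The idea of a wavelet-generated multiresolution cell decomposition can be sketched with a single Haar level: cells near the agent keep full resolution while distant cells are represented by 2x2 block averages (the Haar approximation coefficients). This is a toy illustration of the representation only, not the authors' planner; all names and parameters are hypothetical:

    ```python
    import numpy as np

    def haar2_level(grid):
        """One level of the 2D Haar wavelet transform: the approximation
        band, i.e. 2x2 block averages of an occupancy grid."""
        return 0.25 * (grid[0::2, 0::2] + grid[1::2, 0::2] +
                       grid[0::2, 1::2] + grid[1::2, 1::2])

    def multires_cells(grid, agent_rc, fine_radius):
        """Keep full-resolution cells within `fine_radius` of the agent and
        represent everything else at half resolution via Haar averages."""
        coarse = haar2_level(grid)
        r, c = agent_rc
        rows, cols = np.ogrid[:grid.shape[0], :grid.shape[1]]
        near = (np.abs(rows - r) <= fine_radius) & (np.abs(cols - c) <= fine_radius)
        fine_cells = [(i, j, grid[i, j]) for i, j in zip(*np.nonzero(near))]
        return fine_cells, coarse

    grid = np.zeros((8, 8))
    grid[2, 3] = 1.0  # an obstacle near the agent
    fine, coarse = multires_cells(grid, agent_rc=(2, 2), fine_radius=1)
    ```

    Recursing the Haar step yields progressively coarser rings, which is what lets the planner bound the size of the cell graph it searches online.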

  19. Audio aided electro-tactile perception training for finger posture biofeedback.

    PubMed

    Vargas, Jose Gonzalez; Yu, Wenwei

    2008-01-01

    Visual information is a prerequisite for most biofeedback studies. The aim of this study is to explore how audio-aided training helps in the learning of dynamic electro-tactile perception without any visual feedback. In this research, the electrical stimulation patterns associated with the experimenter's finger postures and motions were presented to the subjects. Along with the electrical stimulation patterns, two types of information on finger postures and motions, verbal and audio, were presented to the verbal training group (group 1) and the audio training group (group 2), respectively. The results showed an improvement in the ability to distinguish and memorize electrical stimulation patterns corresponding to finger postures and motions without visual feedback; with the aid of audio tones, learning was faster and perception became more precise after training. This study thus clarified that, as a substitute for visual presentation, auditory information can effectively aid the formation of electro-tactile perception. Further research is needed to clarify the difference between visually guided and audio-aided training in terms of information compilation, post-training effects, and robustness of perception.

  20. Local and global aspects of biological motion perception in children born at very low birth weight

    PubMed Central

    Williamson, K. E.; Jakobson, L. S.; Saunders, D. R.; Troje, N. F.

    2015-01-01

    Biological motion perception can be assessed using a variety of tasks. In the present study, 8- to 11-year-old children born prematurely at very low birth weight (<1500 g) and matched, full-term controls completed tasks that required the extraction of local motion cues, the ability to perceptually group these cues to extract information about body structure, and the ability to carry out higher order processes required for action recognition and person identification. Preterm children exhibited difficulties in all 4 aspects of biological motion perception. However, intercorrelations between test scores were weak in both full-term and preterm children—a finding that supports the view that these processes are relatively independent. Preterm children also displayed more autistic-like traits than full-term peers. In preterm (but not full-term) children, these traits were negatively correlated with performance in the task requiring structure-from-motion processing (r(30) = −.36, p < .05), but positively correlated with the ability to extract identity (r(30) = .45, p < .05). These findings extend previous reports of vulnerability in systems involved in processing dynamic cues in preterm children and suggest that a core deficit in social perception/cognition may contribute to the development of the social and behavioral difficulties even in members of this population who are functioning within the normal range intellectually. The results could inform the development of screening, diagnostic, and intervention tools. PMID:25103588

  1. Visual Motion Perception and Visual Attentive Processes.

    DTIC Science & Technology

    1988-04-01

    88-0551 Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364... Sperling. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. HIPS is the Human Information Processing Laboratory's Image Processing System. 1985 van Santen, Jan P. H., and George Sperling. Elaborated Reichardt detectors. Journal of the Optical

  2. Methodology for estimating human perception to tremors in high-rise buildings

    NASA Astrophysics Data System (ADS)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception of tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception of tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. Recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level of tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. The estimated results are found to match reasonably well with reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
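    The intensity parameter used above can be sketched in a few lines. The abstract defines it as the average response spectrum intensity over the 0.1-2.0 s period band; the exact normalization and the spectrum values below are assumptions for illustration:

    ```python
    import numpy as np

    def avg_spectrum_intensity(periods, sa, t_min=0.1, t_max=2.0):
        """Average response spectrum intensity over a period band:
        the mean of the spectral acceleration Sa(T), sampled on a dense
        uniform grid between t_min and t_max (0.1-2.0 s in the paper)."""
        t = np.linspace(t_min, t_max, 200)
        sa_interp = np.interp(t, periods, sa)  # linear interpolation of Sa(T)
        return float(np.mean(sa_interp))

    # Hypothetical response spectrum of a modified Singapore ground motion,
    # as (period in s, Sa in g) pairs:
    periods = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 3.0])
    sa = np.array([0.08, 0.10, 0.25, 0.15, 0.05, 0.03])
    intensity = avg_spectrum_intensity(periods, sa)
    ```

    A regression model of the kind the paper describes would then map this scalar intensity to the predicted percentage of occupants perceiving the tremor at each story count.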

  3. Feature-Based Attention in Early Vision for the Modulation of Figure–Ground Segregation

    PubMed Central

    Wagatsuma, Nobuhiko; Oki, Megumi; Sakai, Ko

    2013-01-01

    We investigated psychophysically whether feature-based attention modulates the perception of figure–ground (F–G) segregation and, based on the results, we investigated computationally the neural mechanisms underlying attention modulation. In the psychophysical experiments, the attention of participants was drawn to a specific motion direction and they were then asked to judge the side of figure in an ambiguous figure with surfaces consisting of distinct motion directions. The results of these experiments showed that the surface consisting of the attended direction of motion was more frequently observed as figure, with a degree comparable to that of spatial attention (Wagatsuma et al., 2008). These experiments also showed that perception was dependent on the distribution of feature contrast, specifically the motion direction differences. These results led us to hypothesize that feature-based attention functions in a framework similar to that of spatial attention. We proposed a V1–V2 model in which feature-based attention modulates the contrast of low-level feature in V1, and this modulation of contrast changes directly the surround modulation of border-ownership-selective cells in V2; thus, perception of F–G is biased. The model exhibited good agreement with human perception in the magnitude of attention modulation and its invariance among stimuli. These results indicate that early-level features that are modified by feature-based attention alter subsequent processing along afferent pathway, and that such modification could even change the perception of object. PMID:23515841

  5. On the road to somewhere: Brain potentials reflect language effects on motion event perception.

    PubMed

    Flecken, Monique; Athanasopoulos, Panos; Kuipers, Jan Rouke; Thierry, Guillaume

    2015-08-01

    Recent studies have identified neural correlates of language effects on perception in static domains of experience such as colour and objects. The generalization of such effects to dynamic domains like motion events remains elusive. Here, we focus on grammatical differences between languages relevant for the description of motion events and their impact on visual scene perception. Two groups of native speakers of German or English were presented with animated videos featuring a dot travelling along a trajectory towards a geometrical shape (endpoint). English is a language with grammatical aspect, in which attention is drawn equally to the trajectory and endpoint of motion events. German, in contrast, is a non-aspect language that highlights endpoints. We tested the comparative perceptual saliency of trajectory and endpoint of motion events by presenting motion event animations (primes) followed by a picture symbolising the event (target): in 75% of trials, the animation was followed by a mismatching picture (both trajectory and endpoint were different); in 10% of trials, only the trajectory depicted in the picture matched the prime; in 10% of trials, only the endpoint matched the prime; and in 5% of trials both trajectory and endpoint matched, which was the condition requiring a response from the participant. In Experiment 1 we recorded event-related brain potentials elicited by the picture in native speakers of German and native speakers of English. German participants exhibited a larger P3 wave in the endpoint match than in the trajectory match condition, whereas English speakers showed no P3 amplitude difference between conditions. In Experiment 2 participants performed a behavioural motion matching task using the same stimuli as in Experiment 1. German and English participants did not differ in response times, showing that motion event verbalisation cannot readily account for the difference in P3 amplitude found in the first experiment. We argue that, even in a non-verbal context, the grammatical properties of the native language and associated sentence-level patterns of event encoding influence motion event perception, such that attention is automatically drawn towards aspects highlighted by the grammar.

  6. Motion illusion – evidence towards human vestibulo-thalamic projections

    PubMed Central

    Shaikh, Aasef G.; Straumann, Dominik; Palla, Antonella

    2017-01-01

    Introduction Contemporary studies have speculated that the cerebellar network responsible for motion perception projects to the cerebral cortex via the vestibulo-thalamus. Here we sought the physiological properties of the vestibulo-thalamic pathway responsible for motion perception. Methods Healthy subjects and a patient with a focal vestibulo-thalamic lacunar stroke spun a hand-held rheostat to approximate the value of perceived angular velocity during whole-body passive earth-vertical axis rotations in the yaw plane. The vestibulo-ocular reflex was simultaneously measured with high-resolution search coils (paradigm 1). In primates the vestibulo-thalamic projections remain medial and then dorsomedial to the subthalamus. Paradigm 2 therefore assessed the effects of high-frequency subthalamic nucleus electrical stimulation through the medial and caudal deep brain stimulation electrode in five subjects with Parkinson's disease. Results Paradigm 1 revealed a directional mismatch of perceived rotation in the patient with the vestibulo-thalamic lacune. There was no such mismatch in the vestibulo-ocular reflex. Healthy subjects showed no such directional discrepancy of perceived motion. The results confirmed that perceived angular motion is relayed through the thalamus. Stimulation through the medial and caudal-most electrode of the subthalamic deep brain stimulator in paradigm 2 resulted in perception of rotational motion in the horizontal semicircular canal plane. One patient perceived riding a swing, a complex motion, possibly a combination of vertical canal and otolith-derived signals representing pitch and fore-aft motion, respectively. Conclusion The results delineate the physiological properties of a vestibulo-thalamic pathway that passes in proximity to the subthalamic nucleus, conducting pure semicircular canal signals and convergent signals from the semicircular canals and the otoliths. PMID:28127679

  7. The development of global motion discrimination in school aged children

    PubMed Central

    Bogfjellmo, Lotte-Guri; Bex, Peter J.; Falkenberg, Helle K.

    2014-01-01

    Global motion perception matures during childhood and involves the detection of local directional signals that are integrated across space. We examine the maturation of local directional selectivity and global motion integration with an equivalent noise paradigm applied to direction discrimination. One hundred and three observers (6–17 years) identified the global direction of motion in a 2AFC task. The 8° central stimuli consisted of 100 dots of 10% Michelson contrast moving 2.8°/s or 9.8°/s. Local directional selectivity and global sampling efficiency were estimated from direction discrimination thresholds as a function of external directional noise, speed, and age. Direction discrimination thresholds improved gradually until the age of 14 years (linear regression, p < 0.05) for both speeds. This improvement was associated with a gradual increase in sampling efficiency (linear regression, p < 0.05), with no significant change in internal noise. Direction sensitivity was lower for dots moving at 2.8°/s than at 9.8°/s for all ages (paired t test, p < 0.05), and this was mainly due to lower sampling efficiency. Global motion perception improves gradually during development and matures by age 14. There was no change in internal noise after the age of 6, suggesting that local direction selectivity is mature by that age. The improvement in global motion perception is underpinned by a steady increase in the efficiency with which direction signals are pooled, suggesting that global motion pooling processes continue to mature later than local motion processes. PMID:24569985
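    The equivalent noise paradigm used here has a standard formulation that separates the two estimated quantities. A sketch of that standard model (which may differ in detail from the paper's exact fit; parameter values are hypothetical):

    ```python
    import math

    def direction_threshold(ext_noise_sd, int_noise_sd, n_samples):
        """Equivalent-noise model of direction discrimination: the observed
        threshold reflects internal noise plus external directional noise,
        reduced by the number of local signals effectively pooled
        (sampling efficiency):

            threshold = sqrt((sigma_int**2 + sigma_ext**2) / n_samples)
        """
        return math.sqrt((int_noise_sd**2 + ext_noise_sd**2) / n_samples)

    # At low external noise, thresholds are internal-noise limited; at high
    # external noise, they are limited by how many samples are pooled.
    # A child pooling fewer samples shows a higher threshold than an adult
    # with the same internal noise:
    child = direction_threshold(ext_noise_sd=16.0, int_noise_sd=4.0, n_samples=4)
    adult = direction_threshold(ext_noise_sd=16.0, int_noise_sd=4.0, n_samples=16)
    ```

    Fitting this curve to thresholds measured across external noise levels yields the two parameters reported in the study: internal noise (flat at low external noise) and sampling efficiency (slope at high external noise).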

  8. Two-year-olds with autism orient to non-social contingencies rather than biological motion.

    PubMed

    Klin, Ami; Lin, David J; Gorrindo, Phillip; Ramsay, Gordon; Jones, Warren

    2009-05-14

    Typically developing human infants preferentially attend to biological motion within the first days of life. This ability is highly conserved across species and is believed to be critical for filial attachment and for detection of predators. The neural underpinnings of biological motion perception are overlapping with brain regions involved in perception of basic social signals such as facial expression and gaze direction, and preferential attention to biological motion is seen as a precursor to the capacity for attributing intentions to others. However, in a serendipitous observation, we recently found that an infant with autism failed to recognize point-light displays of biological motion, but was instead highly sensitive to the presence of a non-social, physical contingency that occurred within the stimuli by chance. This observation raised the possibility that perception of biological motion may be altered in children with autism from a very early age, with cascading consequences for both social development and the lifelong impairments in social interaction that are a hallmark of autism spectrum disorders. Here we show that two-year-olds with autism fail to orient towards point-light displays of biological motion, and their viewing behaviour when watching these point-light displays can be explained instead as a response to non-social, physical contingencies--physical contingencies that are disregarded by control children. This observation has far-reaching implications for understanding the altered neurodevelopmental trajectory of brain specialization in autism.

  10. Global motion perception in children with amblyopia as a function of spatial and temporal stimulus parameters.

    PubMed

    Meier, Kimberly; Sum, Brian; Giaschi, Deborah

    2016-10-01

    Global motion sensitivity in typically developing children depends on the spatial (Δx) and temporal (Δt) displacement parameters of the motion stimulus. Specifically, sensitivity for small Δx values matures at a later age, suggesting it may be the most vulnerable to damage by amblyopia. To explore this possibility, we compared motion coherence thresholds of children with amblyopia (7-14 years old) to age-matched controls. Three Δx values were used with two Δt values, yielding six conditions covering a range of speeds (0.3-30 deg/s). We predicted children with amblyopia would show normal coherence thresholds for the same parameters on which 5-year-olds previously demonstrated mature performance, and elevated coherence thresholds for parameters on which 5-year-olds demonstrated immaturities. Consistent with this, we found that children with amblyopia showed deficits with amblyopic eye viewing compared to controls for small and medium Δx values, regardless of Δt value. The fellow eye showed similar results at the smaller Δt. These results confirm that global motion perception in children with amblyopia is particularly deficient at the finer spatial scales that typically mature later in development. An additional implication is that carefully designed stimuli that are adequately sensitive must be used to assess global motion function in developmental disorders. Stimulus parameters for which performance matures early in life may not reveal global motion perception deficits.

  11. The role of eye movements in depth from motion parallax during infancy

    PubMed Central

    Nawrot, Elizabeth; Nawrot, Mark

    2013-01-01

    Motion parallax is a motion-based, monocular depth cue that uses an object's relative motion and velocity as a cue to relative depth. In adults, and in monkeys, a smooth pursuit eye movement signal is used to disambiguate the depth-sign provided by these relative motion cues. The current study investigates infants' perception of depth from motion parallax and the development of two oculomotor functions, smooth pursuit and the ocular following response (OFR) eye movements. Infants 8 to 20 weeks of age were presented with three tasks in a single session: depth from motion parallax, smooth pursuit tracking, and OFR to translation. The development of smooth pursuit was significantly related to age, as was sensitivity to motion parallax. OFR eye movements also corresponded to both age and smooth pursuit gain, with groups of infants demonstrating asymmetric function in both types of eye movements. These results suggest that the development of the eye movement system may play a crucial role in the sensitivity to depth from motion parallax in infancy. Moreover, describing the development of these oculomotor functions in relation to depth perception may aid in the understanding of certain visual dysfunctions. PMID:24353309

  12. Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2012-01-01

    Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…

  13. Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction

    PubMed Central

    Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick

    2016-01-01

    Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907

  14. Aural-Visual-Kinesthetic Imagery in Motion Media.

    ERIC Educational Resources Information Center

    Allan, David W.

    Motion media refers to film, television, and other forms of kinesthetic media including computerized multimedia technologies and virtual reality. Imagery reproduced by motion media carries a multisensory amalgamation of mental experiences. The blending of these experiences phenomenologically intersects with the reality and perception of words,…

  15. Visible propagation from invisible exogenous cueing.

    PubMed

    Lin, Zhicheng; Murray, Scott O

    2013-09-20

    Perception and performance are affected not just by what we see but also by what we do not see: inputs that escape our awareness. While conscious processing and unconscious processing have been assumed to be separate and independent, here we report the propagation of unconscious exogenous cueing as determined by conscious motion perception. In a paradigm combining masked exogenous cueing and apparent motion, we show that, when an onset cue was rendered invisible, the unconscious exogenous cueing effect traveled, manifesting at uncued locations (4° apart) in accordance with conscious perception of visual motion; the effect diminished when the cue-to-target distance was 8°. In contrast, conscious exogenous cueing manifested at both distances. Further evidence reveals that the unconscious and conscious nonretinotopic effects could not be explained by an attentional gradient, nor by bottom-up, energy-based motion mechanisms; rather, they were subserved by top-down, tracking-based motion mechanisms. We thus term these effects mobile cueing. Taken together, unconscious mobile cueing effects (a) demonstrate a previously unknown degree of flexibility of unconscious exogenous attention; (b) embody a simultaneous dissociation and association of attention and consciousness, in which exogenous attention can occur without cue awareness ("dissociation"), yet at the same time its effect is contingent on conscious motion tracking ("association"); and (c) underscore the interaction of conscious and unconscious processing, providing evidence for an unconscious effect that is not automatic but controlled.

  16. The application of biological motion research: biometrics, sport, and the military.

    PubMed

    Steel, Kylie; Ellem, Eathan; Baxter, David

    2015-02-01

    The body of research that examines the perception of biological motion is extensive and explores the factors that are perceived from biological motion and how this information is processed. This research demonstrates that individuals are able to use relative (temporal and spatial) information from a person's movement to recognize factors including gender, age, deception, emotion, intention, and action. The research also demonstrates that movement presents idiosyncratic properties that allow individual discrimination, thus providing the basis for significant exploration in the domains of biometrics and social signal processing. The domains of medical forensics, safety garments, and victim selection also have a history of research applying the perception of biological motion; however, a number of additional domains present opportunities for application that have not been explored in depth. Therefore, the purpose of this paper is to present an overview of the current applications of biological motion-based research and to propose a number of areas where biological motion research, specific to recognition, could be applied in the future.

  17. “What Women Like”: Influence of Motion and Form on Esthetic Body Perception

    PubMed Central

    Cazzato, Valentina; Siega, Serena; Urgesi, Cosimo

    2012-01-01

    Several studies have shown the distinct contribution of motion and form to the esthetic evaluation of female bodies. Here, we investigated how variations of implied motion and body size interact in the esthetic evaluation of female and male bodies in a sample of young healthy women. Participants provided attractiveness, beauty, and liking ratings for the shape and posture of virtual renderings of human bodies with variable body size and implied motion. The esthetic judgments for both shape and posture of human models were influenced by body size and implied motion, with a preference for thinner and more dynamic stimuli. Implied motion, however, attenuated the impact of extreme body size on the esthetic evaluation of body postures, while body size variations did not affect the preference for more dynamic stimuli. Results show that body form and action cues interact in esthetic perception, but the final esthetic appreciation of human bodies is predicted by a mixture of perceptual and affective evaluative components. PMID:22866044

  18. Blindsight modulation of motion perception.

    PubMed

    Intriligator, James M; Xie, Ruiman; Barton, Jason J S

    2002-11-15

    Monkey data suggest that of all perceptual abilities, motion perception is the most likely to survive striate damage. The results of studies on motion blindsight in humans, though, are mixed. We used an indirect strategy to examine how responses to visible stimuli were modulated by blind-field stimuli. In a 26-year-old man with focal striate lesions, discrimination of visible optic flow was enhanced about 7% by blind-field flow, even though discrimination of optic flow in the blind field alone (the direct strategy) was at chance. Pursuit of an imagined target using peripheral cues showed reduced variance but not increased gain with blind-field cues. Preceding blind-field prompts shortened reaction times to visible targets by about 10 msec, but there was no attentional crowding of visible stimuli by blind-field distractors. A similar efficacy of indirect blind-field optic flow modulation was found in a second patient with residual vision after focal striate damage, but not in a third with more extensive medial occipito-temporal damage. We conclude that indirect modulatory strategies are more effective than direct forced-choice methods at revealing residual motion perception after focal striate lesions.

  19. Human Perception of Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Zhang, Guan-Lu

    2010-01-01

    Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e., gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous, since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e., Earth, microgravity, and lunar) have been shown to alter the function of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications for future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in the neuroscience field proposes that the human brain formulates a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for the experimental hypothesis. Specifically, two novel motion paradigms are employed to validate the neural capacity for ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two-degree-of-freedom motion system is able to tilt subjects in the pitch plane and translate them along the fore-aft axis. Perception data were gathered through subject verbal reports. Preliminary analysis of the perceptual data does not indicate that the GIF resolution hypothesis is completely valid for non-rotational periodic motions. Additionally, human perception of translation is impaired without visual or spatial reference. The performance of ground-based subjects in estimating tilt after brief training is comparable with that of crewmembers without training.

  20. Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow

    PubMed Central

    Katsuyama, Narumi; Usui, Nobuo; Taira, Masato

    2016-01-01

    A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth induced by a cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by a moving cast shadow and by binocular disparity. PMID:27597999

  1. Auditory motion processing after early blindness

    PubMed Central

    Jiang, Fang; Stecker, G. Christopher; Fine, Ione

    2014-01-01

    Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
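    The decoding analysis described above assigns a perceptual label to each multivoxel response pattern. As a loose illustration (not the authors' pipeline), a minimal nearest-centroid classifier over invented "voxel" responses might look like the following; the data, labels, and ROI are hypothetical:

    ```python
    # Minimal sketch of fMRI pattern classification (nearest-centroid).
    # The voxel responses and ROI below are invented for illustration.

    def centroid(patterns):
        """Mean response across training patterns, per voxel."""
        n = len(patterns)
        return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

    def classify(pattern, centroids):
        """Assign the label whose centroid is nearest (Euclidean distance)."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return min(centroids, key=lambda label: dist(pattern, centroids[label]))

    # Hypothetical voxel responses in an ROI (e.g., hMT+) for two
    # perceived auditory motion directions.
    train = {
        "left":  [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0], [1.1, 0.1, 0.2]],
        "right": [[0.1, 0.9, 1.0], [0.2, 1.0, 0.9], [0.0, 0.8, 1.1]],
    }
    centroids = {label: ps and centroid(ps) for label, ps in train.items()}
    print(classify([1.0, 0.25, 0.1], centroids))  # a left-like test pattern
    ```

    In practice such studies use cross-validated linear classifiers over many voxels, but the core logic, mapping a held-out response pattern to the nearest learned category, is the same.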

  2. Sigmund Exner's (1887) Einige Beobachtungen über Bewegungsnachbilder (Some Observations on Movement Aftereffects): An Illustrated Translation With Commentary.

    PubMed

    Verstraten, Frans A J; Niehorster, Diederick C; van de Grind, Wim A; Wade, Nicholas J

    2015-10-01

    In his original contribution, Exner's principal concern was a comparison between the properties of different aftereffects, and particularly to determine whether aftereffects of motion were similar to those of color and whether they could be encompassed within a unified physiological framework. Despite the fact that he was unable to answer his main question, there are some excellent-so far unknown-contributions in Exner's paper. For example, he describes observations that can be related to binocular interaction, not only in motion aftereffects but also in rivalry. To the best of our knowledge, Exner provides the first description of binocular rivalry induced by differently moving patterns in each eye, for motion as well as for their aftereffects. Moreover, apart from several known, but beautifully addressed, phenomena he makes a clear distinction between motion in depth based on stimulus properties and motion in depth based on the interpretation of motion. That is, the experience of movement, as distinct from the perception of movement. The experience, unlike the perception, did not result in a motion aftereffect in depth.

  4. How imagery changes self-motion perception

    PubMed Central

    Nigmatullina, Y.; Arshad, Q.; Wu, K.; Seemungal, B.M.; Bronstein, A.M.; Soto, D.

    2015-01-01

    Imagery and perception are thought to be tightly linked; however, little is known about the interaction between imagery and the vestibular sense, in particular, self-motion perception. In this study, the observers were seated in the dark on a motorized chair that could rotate either to the right or to the left. Prior to the physical rotation, observers were asked to imagine themselves rotating leftward or rightward. We found that if the direction of imagined rotation was different from the physical rotation of the chair (incongruent trials), the velocity of the chair needed to be higher for observers to experience themselves rotating relative to when the imagined and the physical rotation matched (on congruent trials). Accordingly, the vividness of imagined rotations was reduced on incongruent relative to congruent trials. Notably, we found that similar effects of imagery were found at the earliest stages of vestibular processing, namely, the onset of the vestibular–ocular reflex was modulated by the congruency between physical and imagined rotations. Together, the results demonstrate that mental imagery influences self-motion perception by exerting top-down influences over the earliest vestibular response and subsequent perceptual decision-making. PMID:25637805

  5. Frogs Jump Forward: Semantic Knowledge Influences the Perception of Element Motion in the Ternus Display.

    PubMed

    Hsu, Patty; Taylor, J Eric T; Pratt, Jay

    2015-01-01

    The Ternus effect is a robust illusion of motion that produces element motion at short interstimulus intervals (ISIs; < 50 ms) and group motion at longer ISIs (> 50 ms). Previous research has shown that the nature of the stimuli (e.g., similarity, grouping), not just ISI, can influence the likelihood of perceiving element or group motion. We examined whether semantic knowledge can also influence which type of illusory motion is perceived. In Experiment 1, we used a modified Ternus display with pictures of frogs in a jump-ready pose facing either toward or away from the direction of illusory motion. Participants perceived more element motion with the forward-facing frogs and more group motion with the backward-facing frogs. Experiment 2 tested whether this effect would still occur with line drawings of frogs, or whether a more life-like image was necessary. Experiment 3 tested whether the effect was due to visual asymmetries inherent in the jumping pose. Experiment 4 tested whether frogs in a "non-jumping," sedentary pose would replicate the original effect. These experiments elucidate the role of semantic knowledge in the Ternus effect: prior knowledge of the movement of certain animate objects, in this case frogs, can bias the perception of element or group motion.
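    The ISI dependence described above is often summarized as a simple threshold rule. The sketch below encodes that rule, with a hypothetical `bias_ms` term standing in for stimulus factors (such as the frogs' facing direction) that shift the element/group boundary; the ~50 ms boundary comes from the abstract, everything else is illustrative:

    ```python
    # Toy model of the Ternus display: element motion at short ISIs,
    # group motion at long ISIs (boundary ~50 ms, per the abstract).
    # bias_ms is a hypothetical parameter, not a published quantity.

    def ternus_percept(isi_ms, bias_ms=0.0, boundary_ms=50.0):
        """Return the dominant percept for a given inter-stimulus interval.

        A positive bias_ms models factors that favor element motion by
        effectively raising the boundary.
        """
        return "element" if isi_ms < boundary_ms + bias_ms else "group"

    print(ternus_percept(30))              # short ISI -> element motion
    print(ternus_percept(80))              # long ISI -> group motion
    print(ternus_percept(55, bias_ms=10))  # bias tips the percept to element
    ```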

  6. Vibro-Perception of Optical Bio-Inspired Fiber-Skin.

    PubMed

    Li, Tao; Zhang, Sheng; Lu, Guo-Wei; Sunami, Yuta

    2018-05-12

    In this research, based on the principle of optical interferometry, Mach-Zehnder and Optical Phase-locked Loop (OPLL) vibro-perception systems of bio-inspired fiber-skin are designed to mimic the tactile perception of human skin. The fiber-skin is made of an optical fiber embedded in a silicone elastomer. The optical fiber is an attractive sensor for tactile perception, offering high sensitivity and reliability as well as low cost and low susceptibility to magnetic interference. The silicone elastomer serves as a substrate with high flexibility and biocompatibility, and the optical fiber core serves as the vibro-perception sensor to detect physical motions such as tapping and sliding. According to the experimental results, the designed optical fiber-skin demonstrates the ability to detect physical motions such as tapping and sliding in both the Mach-Zehnder and OPLL vibro-perception systems. Under the direct-contact condition, the OPLL vibro-perception system shows better performance than the Mach-Zehnder vibro-perception system. However, the Mach-Zehnder vibro-perception system is preferable to the OPLL system in the indirect-contact experiment. In summary, the fiber-skin is validated to detect light touch with excellent repeatability, making it highly suitable for skin-mimicking sensing.

  7. Enhancing Motion-In-Depth Perception of Random-Dot Stereograms.

    PubMed

    Zhang, Di; Nourrit, Vincent; De Bougrenet de la Tocnaye, Jean-Louis

    2018-07-01

    Random-dot stereograms have been widely used to explore the neural mechanisms underlying binocular vision. Although they are a powerful tool to stimulate motion-in-depth (MID) perception, published results report some difficulties in the capacity to perceive MID generated by random-dot stereograms. The purpose of this study was to investigate whether the performance of MID perception could be improved using an appropriate stimulus design. Sixteen inexperienced observers participated in the experiment. A training session was carried out to improve the accuracy of MID detection before the experiment. Four aspects of stimulus design were investigated: presence of a static reference, background texture, relative disparity, and stimulus contrast. Participants' performance in MID direction discrimination was recorded and compared to evaluate whether varying these factors helped MID perception. Results showed that only the presence of background texture had a significant effect on MID direction perception. This study provides suggestions for the design of 3D stimuli in order to facilitate MID perception.

  8. Suppressive mechanisms in visual motion processing: From perception to intelligence.

    PubMed

    Tadin, Duje

    2015-10-01

    Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and individuals with schizophrenia, a deficit that is evidenced by better-than-normal direction discrimination of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research showing that individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores.

  9. Pitch body orientation influences the perception of self-motion direction induced by optic flow.

    PubMed

    Bourrelly, A; Vercher, J-L; Bringoux, L

    2010-10-04

    We studied the effect of static pitch body tilts on the perception of self-motion direction induced by a visual stimulus. Subjects were seated in front of a screen on which was projected a 3D cluster of moving dots visually simulating a forward motion of the observer with upward or downward directional biases (relative to a true earth-horizontal direction). The subjects were tilted at various angles relative to gravity and were asked to estimate the direction of the perceived motion (nose-up, as during take-off, or nose-down, as during landing). The data showed that body orientation proportionally affected the amount of error in the reported perceived direction (by 40% of body tilt magnitude in a range of +/-20 degrees), and these errors were systematically in the direction of body tilt. As a consequence, the same visual stimulus was interpreted differently depending on body orientation. While the subjects were required to perform the task in a geocentric reference frame (i.e., relative to a gravity-related direction), they were obviously influenced by egocentric references. These results suggest that the perception of self-motion is not elaborated within an exclusive reference frame (either egocentric or geocentric) but rather results from the combined influence of both.

  10. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  11. Motion transparency: making models of motion perception transparent.

    PubMed

    Snowden; Verstraten

    1999-10-01

    In daily life our visual system is bombarded with motion information. We see cars driving by, flocks of birds flying in the sky, clouds passing behind trees that are dancing in the wind. Vision science has a good understanding of the first stage of visual motion processing, that is, the mechanism underlying the detection of local motions. Currently, research is focused on the processes that occur beyond this first stage. At this level, local motions have to be integrated to form objects, define the boundaries between them, construct surfaces, and so on. An interesting, if complicated, case is known as motion transparency: the situation in which two overlapping surfaces move transparently over each other. In that case, two motions have to be assigned to the same retinal location. Several researchers have tried to solve this problem from a computational point of view, using physiological and psychophysical results as a guideline. We will discuss two models: one uses the traditional idea known as 'filter selection' and the other a relatively new approach based on Bayesian inference. Predictions from these models are compared with our own visual behaviour and that of the neural substrates that are presumed to underlie these perceptions.
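    The Bayesian approach mentioned above can be made concrete with a toy model. The sketch below is not either published model; it simply compares a one-motion hypothesis against a two-motion (transparent) mixture for noisy local direction measurements, with invented priors, noise level, and candidate directions:

    ```python
    import math

    # Toy Bayesian model choice for motion transparency: do the noisy local
    # direction samples (in degrees) come from one surface or two?
    # The noise level (sigma), prior, and candidate directions are invented.

    def gauss(x, mu, sigma=10.0):
        """Gaussian likelihood of a direction sample around a true direction."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def log_likelihood(samples, directions):
        """Each sample is explained by a uniform mixture over candidate directions."""
        return sum(
            math.log(sum(gauss(s, d) for d in directions) / len(directions))
            for s in samples
        )

    def infer(samples, prior_two=0.5):
        """Compare a single-motion hypothesis (at the sample mean) against a
        hypothetical two-motion mixture at 0 and 90 degrees."""
        mean = sum(samples) / len(samples)
        one = log_likelihood(samples, [mean]) + math.log(1 - prior_two)
        two = log_likelihood(samples, [0.0, 90.0]) + math.log(prior_two)
        return "transparent (two motions)" if two > one else "single motion"

    # Local directions clustered around 0 and 90 degrees, as when two
    # surfaces slide transparently over each other.
    print(infer([-3, 2, 5, 88, 92, 91]))
    ```

    With samples split between two direction clusters, the mixture hypothesis wins decisively; with samples clustered around one direction, the single-motion hypothesis does. Real Bayesian models of transparency operate over full velocity distributions, but the model-comparison logic is analogous.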

  12. Neural Integration of Information Specifying Human Structure from Form, Motion, and Depth

    PubMed Central

    Jackson, Stuart; Blake, Randolph

    2010-01-01

    Recent computational models of biological motion perception operate on ambiguous two-dimensional representations of the body (e.g., snapshots, posture templates) and contain no explicit means for disambiguating the three-dimensional orientation of a perceived human figure. Are there neural mechanisms in the visual system that represent a moving human figure’s orientation in three dimensions? To isolate and characterize the neural mechanisms mediating perception of biological motion, we used an adaptation paradigm together with bistable point-light (PL) animations whose perceived direction of heading fluctuates over time. After exposure to a PL walker with a particular stereoscopically defined heading direction, observers experienced a consistent aftereffect: a bistable PL walker, which could be perceived in the adapted orientation or reversed in depth, was perceived predominantly reversed in depth. A phase-scrambled adaptor produced no aftereffect, yet when adapting and test walkers differed in size or appeared on opposite sides of fixation, aftereffects did occur. Thus, this heading direction aftereffect cannot be explained by local, disparity-specific motion adaptation, and the properties of scale and position invariance imply higher-level origins of neural adaptation. Nor is disparity essential for producing adaptation: when suspended on top of a stereoscopically defined, rotating globe, a context-disambiguated “globetrotter” was sufficient to bias the bistable walker’s direction, as were full-body adaptors. In sum, these results imply that the neural signals supporting biomotion perception integrate information on the form, motion, and three-dimensional depth orientation of the moving human figure. Models of biomotion perception should incorporate mechanisms to disambiguate depth ambiguities in two-dimensional body representations. PMID:20089892

  13. Human Systems Integration and Automation Issues in Small Unmanned Aerial Vehicles

    DTIC Science & Technology

    2004-10-01

    display (HMD) bounce. Motion sickness occurs in these situations due to a combination of actual motion plus “cybersickness” (McCauley and Sharkey...Research Laboratory. McCauley, M.E. and Sharkey, T.J. (Summer 1992). Cybersickness: Perception of Self-Motion in Virtual Environments. Presence

  14. I Dream of J.J., or Affordances and Motion Pictures.

    ERIC Educational Resources Information Center

    Anderson, Joseph D.

    1995-01-01

    Categorizes attempts to account for how viewers garner meanings from motion pictures as either semiotic, realist, or conventionalist. Proposes an alternative explanation based on J. J. Gibson's ecological theory of perception. Offers his concept of "affordances" as the key to an explanation of how meanings in motion pictures are…

  15. Efficacy of manual and manipulative therapy in the perception of pain and cervical motion in patients with tension-type headache: a randomized, controlled clinical trial.

    PubMed

    Espí-López, Gemma V; Gómez-Conesa, Antonia

    2014-03-01

    The purpose of this study was to evaluate the efficacy of manipulative and manual therapy treatments with regard to pain perception and neck mobility in patients with tension-type headache. A randomized clinical trial was conducted on 84 adults diagnosed with tension-type headache: 68 women and 16 men. Mean age was 39.76 years, ranging from 18 to 65 years. A total of 57.1% were diagnosed with chronic tension-type headache and 42.9% with tension-type headache. Participants were divided into 3 treatment groups (manual therapy, manipulative therapy, a combination of manual and manipulative therapy) and a control group. Four treatment sessions were administered during 4 weeks, with posttreatment assessment and follow-up at 1 month. Cervical ranges of motion, pain perception, and frequency and intensity of headaches were assessed. All 3 treatment groups showed significant improvements in the different dimensions of pain perception. Manual therapy and manipulative treatment improved some cervical ranges of motion. Headache frequency was reduced with manipulative treatment (P < .008). The combined treatment showed improvement after the treatment (P < .000) and at follow-up (P < .002). Pain intensity improved after the treatment and at follow-up with manipulative therapy (P < .01) and with combined treatment (P < .01). Both treatments, administered separately and in combination, showed efficacy for patients with tension-type headache with regard to pain perception. As for cervical ranges of motion, the treatments produced a greater effect when administered separately.

  16. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  17. Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation

    NASA Technical Reports Server (NTRS)

    O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.

    2006-01-01

The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies, where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented the visual surround motion one would experience if the linear acceleration were due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg, as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained in darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following the 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.
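The phase relationships between tilt and surround motion can be sketched with simple sinusoids (an illustrative parameterization only; the 0.1 Hz frequency and the 0 to 10 deg asymmetrical range come from the abstract, while the cosine waveform and function names are assumptions):

```python
import math

F = 0.1  # stimulus frequency in Hz, as used in the study

def pitch_tilt(t, peak_deg=10.0):
    """Asymmetrical pitch tilt (deg): upright (0) to peak_deg pitch back."""
    return -peak_deg / 2.0 * (1.0 - math.cos(2.0 * math.pi * F * t))

def surround_translation(t, lead_deg=0.0):
    """Normalized fore-aft surround position, phase-advanced by lead_deg
    relative to the tilt waveform: 0 = in-phase (scene motion consistent
    with reading the tilt as fore-aft self-translation), 90 = out-of-phase."""
    return math.cos(2.0 * math.pi * F * t + math.radians(lead_deg))
```

A 90-deg lead at 0.1 Hz is equivalent to advancing the surround waveform by a quarter period, i.e. 2.5 s.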

  18. Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.

    PubMed

    Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A

    2007-06-01

This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices using a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and their laparoscopic skills were assessed objectively while they performed 1-h practice sessions on the LapSim(R) comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with laparoscopic performance scores on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those with a low degree of spatial perception (Group B) (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators developing training programs that can be individually adapted.

  19. Cognitive Rehabilitation in Bilateral Vestibular Patients: A Computational Perspective.

    PubMed

    Ellis, Andrew W; Schöne, Corina G; Vibert, Dominique; Caversaccio, Marco D; Mast, Fred W

    2018-01-01

There is evidence that vestibular sensory processing affects, and is affected by, higher cognitive processes. This is highly relevant from a clinical perspective, where there is evidence for cognitive impairments in patients with peripheral vestibular deficits. The vestibular system performs complex probabilistic computations, and we claim that understanding these is important for investigating interactions between vestibular processing and cognition. Furthermore, this will aid our understanding of patients' self-motion perception and will provide useful information for clinical interventions. We propose that cognitive training is a promising way to alleviate the debilitating symptoms of patients with complete bilateral vestibular loss (bilateral vestibulopathy, BVP), who often fail to show improvement when relying solely on conventional treatment methods. We present a probabilistic model capable of processing vestibular sensory data during both passive and active self-motion. Crucially, in our model, knowledge from multiple sources, including higher-level cognition, can be used to predict head motion. This is the entry point for cognitive interventions. Despite the loss of sensory input, the processing circuitry in BVP patients is still intact, and they can still perceive self-motion when the movement is self-generated. We provide computer simulations illustrating self-motion perception of BVP patients. Cognitive training may lead to more accurate and confident predictions, which result in decreased weighting of sensory input, and thus improved self-motion perception. Using our model, we show the possible impact of cognitive interventions to help vestibular rehabilitation in patients with BVP.
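The weighting mechanism the authors appeal to can be reduced to a one-step precision-weighted fusion of predicted and sensed head motion (a toy sketch, not the paper's full probabilistic model; all variances and values are invented for illustration):

```python
def fuse(prediction, pred_var, sensory, sens_var):
    """Combine a predicted head velocity with a vestibular measurement,
    weighting each by its precision (inverse variance)."""
    w_pred = (1.0 / pred_var) / (1.0 / pred_var + 1.0 / sens_var)
    return w_pred * prediction + (1.0 - w_pred) * sensory

# Intact vestibular input: prediction and sensation share the estimate.
healthy = fuse(prediction=1.0, pred_var=1.0, sensory=0.6, sens_var=1.0)

# BVP: sensory input is uninformative (huge variance), so a confident
# prediction from a self-generated movement dominates the estimate --
# the entry point for cognitive training in the authors' proposal.
bvp = fuse(prediction=1.0, pred_var=1.0, sensory=0.0, sens_var=1e6)
```

More confident predictions (smaller `pred_var`) shift the weight away from the degraded sensory channel, which is the mechanism by which cognitive training is proposed to improve self-motion perception.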

  20. Efficiencies for parts and wholes in biological-motion perception.

    PubMed

    Bromfield, W Drew; Gold, Jason M

    2017-10-01

    People can reliably infer the actions, intentions, and mental states of fellow humans from body movements (Blake & Shiffrar, 2007). Previous research on such biological-motion perception has suggested that the movements of the feet may play a particularly important role in making certain judgments about locomotion (Chang & Troje, 2009; Troje & Westhoff, 2006). One account of this effect is that the human visual system may have evolved specialized processes that are efficient for extracting information carried by the feet (Troje & Westhoff, 2006). Alternatively, the motion of the feet may simply be more discriminable than that of other parts of the body. To dissociate these two possibilities, we measured people's ability to discriminate the walking direction of stimuli in which individual body parts (feet, hands) were removed or shown in isolation. We then compared human performance to that of a statistically optimal observer (Gold, Tadin, Cook, & Blake, 2008), giving us a measure of humans' discriminative ability independent of the information available (a quantity known as efficiency). We found that efficiency was highest when the hands and the feet were shown in isolation. A series of follow-up experiments suggested that observers were relying on a form-based cue with the isolated hands (specifically, the orientation of their path through space) and a motion-based cue with the isolated feet to achieve such high efficiencies. We relate our findings to previous proposals of a distinction between form-based and motion-based mechanisms in biological-motion perception.
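Efficiency in this sense is the squared ratio of human to ideal-observer sensitivity for the same stimuli; a minimal sketch for a two-alternative walking-direction judgment (the 2AFC mapping from proportion correct to d' is an assumption about the task format):

```python
import math
from statistics import NormalDist

def dprime_2afc(prop_correct):
    """Sensitivity d' from proportion correct in a 2AFC judgment."""
    return math.sqrt(2.0) * NormalDist().inv_cdf(prop_correct)

def efficiency(pc_human, pc_ideal):
    """Squared human-to-ideal sensitivity ratio for the same stimuli:
    1.0 means the human uses all available information."""
    return (dprime_2afc(pc_human) / dprime_2afc(pc_ideal)) ** 2
```

Because the ideal observer is limited only by the information in the display, efficiency isolates how well humans use that information, independent of how discriminable a given body part happens to be.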

  1. Comparison of two Simon tasks: neuronal correlates of conflict resolution based on coherent motion perception.

    PubMed

    Wittfoth, Matthias; Buck, Daniela; Fahle, Manfred; Herrmann, Manfred

    2006-08-15

The present study aimed at characterizing the neural correlates of conflict resolution in two variations of the Simon effect. We introduced two different Simon tasks where subjects had to identify shapes on the basis of form-from-motion perception (FFMo) within a randomly moving dot field, while (1) motion direction (motion-based Simon task) or (2) stimulus location (location-based Simon task) had to be ignored. Behavioral data revealed that both types of Simon tasks induced highly significant interference effects. Using event-related fMRI, we could demonstrate that both tasks share a common cluster of activated brain regions during conflict resolution (pre-supplementary motor area (pre-SMA), superior parietal lobule (SPL), and cuneus) but also show task-specific activation patterns (left superior temporal cortex in the motion-based, and the left fusiform gyrus in the location-based Simon task). Although motion-based and location-based Simon tasks are conceptually very similar (Type 3 stimulus-response ensembles according to the taxonomy of [Kornblum, S., Stevens, G. (2002). Sequential effects of dimensional overlap: findings and issues. In: Prinz, W., Hommel, B. (Eds.), Common mechanism in perception and action. Oxford University Press, Oxford, pp. 9-54]), conflict resolution in both tasks results in the activation of different task-specific regions, probably related to the different sources of task-irrelevant information. Furthermore, the present data provide evidence that these task-specific regions are most likely to detect the relationship between task-relevant and task-irrelevant information.

  2. The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.

    PubMed

    Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P

    2015-01-01

Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age-dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
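The 90° target pattern in such tasks is typically quantified as a continuous relative phase between the two oscillations; a minimal sketch of that computation (assumes near-harmonic motion so that velocity can be normalized by the movement frequency omega):

```python
import math

def phase_angle(x, v, omega):
    """Instantaneous phase of a harmonic oscillation from position x
    and velocity v, with velocity normalized by frequency omega."""
    return math.atan2(x, v / omega)

def relative_phase_deg(x1, v1, x2, v2, omega):
    """Continuous relative phase (deg) between two oscillators;
    0 deg = in-phase, 90 deg = the target pattern in this task."""
    d = phase_angle(x1, v1, omega) - phase_angle(x2, v2, omega)
    return math.degrees(d % (2.0 * math.pi))
```

Learning in such studies is then measured as the reduction over practice in the deviation of produced relative phase from the 90° target.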

  3. Visual Cues of Motion That Trigger Animacy Perception at Birth: The Case of Self-Propulsion

    ERIC Educational Resources Information Center

    Di Giorgio, Elisa; Lunghi, Marco; Simion, Francesca; Vallortigara, Giorgio

    2017-01-01

    Self-propelled motion is a powerful cue that conveys information that an object is animate. In this case, animate refers to an entity's capacity to initiate motion without an applied external force. Sensitivity to this motion cue is present in infants that are a few months old, but whether this sensitivity is experience-dependent or is already…

  4. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

    DTIC Science & Technology

    1993-04-01

    suggesting it occurs in later visual motion processing (long-range or second-order system). Figure 2. Gamma motion. (a) A light of fixed spatial extent is illuminated then extinguished. (b) The percept is of a light expanding and then… [figure panels plot stimulus (a) and percept (b) against time, marking flash duration] …while smaller, type-B cells provide input to its parvocellular subdivision. From here the magnocellular pathway progresses up through visual cortex area V…

  5. Visually Guided Control of Movement

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W. (Editor); Kaiser, Mary K. (Editor)

    1991-01-01

    The papers given at an intensive, three-week workshop on visually guided control of movement are presented. The participants were researchers from academia, industry, and government, with backgrounds in visual perception, control theory, and rotorcraft operations. The papers included invited lectures and preliminary reports of research initiated during the workshop. Three major topics are addressed: extraction of environmental structure from motion; perception and control of self-motion; and spatial orientation. Each topic is considered from both theoretical and applied perspectives. Implications for control and display are suggested.

  6. Incorporating Animation Concepts and Principles in STEM Education

    ERIC Educational Resources Information Center

    Harrison, Henry L., III; Hummell, Laura J.

    2010-01-01

    Animation is the rapid display of a sequence of static images that creates the illusion of movement. This optical illusion is often called perception of motion, persistence of vision, illusion of motion, or short-range apparent motion. The phenomenon occurs when the eye is exposed to rapidly changing still images, with each image being changed…

  7. Visual motion detection and habitat preference in Anolis lizards.

    PubMed

    Steinberg, David S; Leal, Manuel

    2016-11-01

    The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.

  8. Unimpaired perception of social and physical causality, but impaired perception of animacy in high functioning children with autism.

    PubMed

    Congiu, Sara; Schlottmann, Anne; Ray, Elizabeth

    2010-01-01

    We investigated perception of social and physical causality and animacy in simple motion events, for high-functioning children with autism (CA = 13, VMA = 9.6). Children matched 14 different animations to pictures showing physical, social or non-causality. In contrast to previous work, children with autism performed at a high level similar to VMA-matched controls, recognizing physical causality in launch and social causality in reaction events. The launch deficit previously found in younger children with autism, possibly related to attentional/verbal difficulties, is apparently overcome with age. Some events involved squares moving non-rigidly, like animals. Children with autism had difficulties recognizing this, extending the biological motion literature. However, animacy prompts amplified their attributions of social causality. Thus children with autism may overcome their animacy perception deficit strategically.

  9. Creating stimuli for the study of biological-motion perception.

    PubMed

    Dekeyser, Mathias; Verfaillie, Karl; Vanrie, Jan

    2002-08-01

    In the perception of biological motion, the stimulus information is confined to a small number of lights attached to the major joints of a moving person. Despite this drastic degradation of the stimulus information, the human visual apparatus organizes the swarm of moving dots into a vivid percept of a moving biological creature. Several techniques have been proposed to create point-light stimuli: placing dots at strategic locations on photographs or films, video recording a person with markers attached to the body, computer animation based on artificial synthesis, and computer animation based on motion-capture data. A description is given of the technique we are currently using in our laboratory to produce animated point-light figures. The technique is based on a combination of motion capture and three-dimensional animation software (Character Studio, Autodesk, Inc., 1998). Some of the advantages of our approach are that the same actions can be shown from any viewpoint, that point-light versions, as well as versions with a full-fleshed character, can be created of the same actions, and that point lights can indicate the center of a joint (thereby eliminating several disadvantages associated with other techniques).
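The viewpoint flexibility described here is, at bottom, a rotation of the joint coordinates before projection; a minimal sketch of that geometry (not the Character Studio pipeline; the orthographic projection and function name are assumptions):

```python
import math

def project_pointlights(joints_3d, azimuth_deg):
    """Render 3-D joint centers as 2-D point lights after rotating the
    figure about its vertical (y) axis, so the same motion-capture
    action can be shown from any viewpoint."""
    a = math.radians(azimuth_deg)
    lights = []
    for x, y, z in joints_3d:
        x_rot = x * math.cos(a) + z * math.sin(a)  # rotation about y
        lights.append((x_rot, y))                  # orthographic: drop depth
    return lights
```

Applying the same rotation frame by frame to captured joint trajectories yields the point-light animation from the chosen viewpoint; rendering a full-fleshed character instead only changes the final drawing step, not this geometry.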

  10. Mom's shadow: structure-from-motion in newly hatched chicks as revealed by an imprinting procedure.

    PubMed

    Mascalzoni, Elena; Regolin, Lucia; Vallortigara, Giorgio

    2009-03-01

    The ability to recognize three-dimensional objects from two-dimensional (2-D) displays was investigated in domestic chicks, focusing on the role of the object's motion. In Experiment 1 newly hatched chicks, imprinted on a three-dimensional (3-D) object, were allowed to choose between the shadows of the familiar object and of an object never seen before. In Experiments 2 and 3 random-dot displays were used to produce the perception of a solid shape only when set in motion. Overall, the results showed that domestic chicks were able to recognize familiar shapes from 2-D motion stimuli. It is likely that similar general mechanisms underlying the perception of structure-from-motion and the extraction of 3-D information are shared by humans and animals. The present data show that these processes occur in birds much as they do in mammals, two separate vertebrate classes, possibly indicating a common phylogenetic origin.

  11. Gravity Cues Embedded in the Kinematics of Human Motion Are Detected in Form-from-Motion Areas of the Visual System and in Motor-Related Areas

    PubMed Central

    Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J. J.; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine

    2017-01-01

    The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer’s motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex. PMID:28861024

  12. Self-motion perception: assessment by computer-generated animations

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Harm, D. L.; Sandoz, G. R.; Skinner, N. C.

    1998-01-01

    The goal of this research is more precise description of adaptation to sensory rearrangements, including microgravity, by development of improved procedures for assessing spatial orientation perception. Thirty-six subjects reported perceived self-motion following exposure to complex inertial-visual motion. Twelve subjects were assigned to each of 3 perceptual reporting procedures: (a) animation movie selection, (b) written report selection and (c) verbal report generation. The question addressed was: do reports produced by these procedures differ with respect to complexity and reliability? Following repeated (within-day and across-day) exposures to 4 different "motion profiles," subjects either (a) selected movies presented on a laptop computer, or (b) selected written descriptions from a booklet, or (c) generated self-motion verbal descriptions that corresponded most closely with their motion experience. One "complexity" and 2 reliability "scores" were calculated. Contrary to expectations, reliability and complexity scores were essentially equivalent for the animation movie selection and written report selection procedures. Verbal report generation subjects exhibited less complexity than did subjects in the other conditions and their reports were often ambiguous. The results suggest that, when selecting from carefully written descriptions and following appropriate training, people may be better able to describe their self-motion experience with words than is usually believed.

  13. Path perception during rotation: influence of instructions, depth range, and dot density

    NASA Technical Reports Server (NTRS)

    Li, Li; Warren, William H Jr

    2004-01-01

    How do observers perceive their direction of self-motion when traveling on a straight path while their eyes are rotating? Our previous findings suggest that information from retinal flow and extra-retinal information about eye movements are each sufficient to solve this problem for both perception and active control of self-motion [Vision Res. 40 (2000) 3873; Psych. Sci. 13 (2002) 485]. In this paper, using displays depicting translation with simulated eye rotation, we investigated how task variables such as instructions, depth range, and dot density influenced the visual system's reliance on retinal vs. extra-retinal information for path perception during rotation. We found that path errors were small when observers expected to travel on a straight path or with neutral instructions, but errors increased markedly when observers expected to travel on a curved path. Increasing depth range or dot density did not improve path judgments. We conclude that the expectation of the shape of an upcoming path can influence the interpretation of the ambiguous retinal flow. A large depth range and dense motion parallax are not essential for accurate path perception during rotation, but reference objects and a large field of view appear to improve path judgments.

  14. Facilitating Effects of Emotion on the Perception of Biological Motion: Evidence for a Happiness Superiority Effect.

    PubMed

    Lee, Hannah; Kim, Jejoong

    2017-06-01

    It has been reported that visual perception can be influenced not only by the physical features of a stimulus but also by the emotional valence of the stimulus, even without explicit emotion recognition. Some previous studies reported an anger superiority effect while others found a happiness superiority effect during visual perception. It thus remains unclear as to which emotion is more influential. In the present study, we conducted two experiments using biological motion (BM) stimuli to examine whether emotional valence of the stimuli would affect BM perception; and if so, whether a specific type of emotion is associated with a superiority effect. Point-light walkers with three emotion types (anger, happiness, and neutral) were used, and the threshold to detect BM within noise was measured in Experiment 1. Participants showed higher performance in detecting happy walkers compared with the angry and neutral walkers. Follow-up motion velocity analysis revealed that physical difference among the stimuli was not the main factor causing the effect. The results of the emotion recognition task in Experiment 2 also showed a happiness superiority effect, as in Experiment 1. These results show that emotional valence (happiness) of the stimuli can facilitate the processing of BM.

  15. The neural basis of form and form-motion integration from static and dynamic translational Glass patterns: A rTMS investigation.

    PubMed

    Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George

    2017-08-15

    A long-held view of the visual system is that form and motion are independently analysed. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive Transcranial Magnetic Stimulation (rTMS) to investigate in human observers the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over early visual areas affected the perception of static GPs, but the stimulation of area V5/MT did not affect observers' performance. On the other hand, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that early visual areas are involved in the processing of the spatial structure of GPs, and interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly interfering with early form-motion integration. Visual area V5/MT, however, is likely to be involved only in the processing of the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2. Copyright © 2017 Elsevier Inc. All rights reserved.
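A translational GP of the kind described can be generated in a few lines (an illustrative sketch; the unit square, dipole separation, and default density are arbitrary choices). A dynamic GP is then just a sequence of such frames with freshly randomized dipole positions but a fixed orientation, which yields the non-directional motion component along the orientation axis:

```python
import math
import random

def glass_pattern(n_dipoles, orientation_deg, coherence=1.0,
                  sep=0.02, rng=None):
    """One frame of a translational Glass pattern: each dipole is a dot
    plus a partner displaced by sep along the shared orientation; with
    probability (1 - coherence) a dipole gets a random orientation."""
    rng = rng or random.Random()
    dots = []
    for _ in range(n_dipoles):
        x, y = rng.random(), rng.random()        # dot in the unit square
        if rng.random() < coherence:
            a = math.radians(orientation_deg)    # signal dipole
        else:
            a = rng.uniform(0.0, 2.0 * math.pi)  # noise dipole
        dots.append((x, y))
        dots.append((x + sep * math.cos(a), y + sep * math.sin(a)))
    return dots
```

Varying `coherence` between the two temporal intervals reproduces the discrimination task used in the study.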

  16. Exhibition of stochastic resonance in vestibular tilt motion perception.

    PubMed

    Galvan-Garza, R C; Clark, T K; Mulavara, A P; Oman, C M

    2018-04-03

    Stochastic Resonance (SR) is a phenomenon broadly described as "noise benefit". The application of subsensory electrical Stochastic Vestibular Stimulation (SVS) via electrodes behind each ear has been used to improve human balance and gait, but its effect on motion perception thresholds has not been examined. This study investigated the capability of subsensory SVS to reduce vestibular motion perception thresholds in a manner consistent with a characteristic bell-shaped SR curve. We measured upright, head-centered, roll tilt Direction Recognition (DR) thresholds in the dark in 12 human subjects with the application of wideband 0-30 Hz SVS ranging from ±0-700 μA. To conservatively assess if SR was exhibited, we compared the proportions of both subjective and statistical SR exhibition in our experimental data to proportions of SR exhibition in multiple simulation cases with varying underlying SR behavior. Analysis included individual and group statistics. As there is no established mathematical definition of SR exhibition, three human raters subjectively judged that SR was exhibited in 78% of subjects. "Statistically significant SR exhibition", which additionally required that a subject's DR threshold with SVS be significantly lower than baseline (no SVS), was present in 50% of subjects. Both percentages were higher than simulations suggested could occur simply by chance. For SR exhibitors, defined by subjective or statistically significant criteria, the mean DR threshold improved by -30% and -39%, respectively. The largest individual improvement was -47%. At least half of the subjects were better able to perceive passive body motion with the application of subsensory SVS. This study presents the first conclusive demonstration of SR in vestibular motion perception. Copyright © 2018 Elsevier Inc. All rights reserved.
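The "noise benefit" logic behind SR can be demonstrated with a toy threshold detector (this illustrates the principle only, not the vestibular experiment; the sinusoidal signal, threshold, and noise values are invented):

```python
import math
import random

def sr_metric(noise_sd, n=20000, amp=0.5, threshold=1.0, seed=7):
    """Correlation between a subthreshold sinusoid and the binary output
    of a hard-threshold detector fed signal + Gaussian noise. Zero noise
    gives no output at all; moderate noise lets the signal shape through;
    large noise drowns it -- the bell-shaped SR curve."""
    rng = random.Random(seed)
    s = [amp * math.sin(2.0 * math.pi * i / 100.0) for i in range(n)]
    y = [1.0 if si + rng.gauss(0.0, noise_sd) > threshold else 0.0
         for si in s]
    my = sum(y) / n
    if my in (0.0, 1.0):          # detector never (or always) fires
        return 0.0
    ms = sum(s) / n
    cov = sum((si - ms) * (yi - my) for si, yi in zip(s, y)) / n
    var_s = sum((si - ms) ** 2 for si in s) / n
    var_y = my * (1.0 - my)
    return cov / math.sqrt(var_s * var_y)
```

Sweeping `noise_sd` from zero upward traces the characteristic rise-then-fall of signal transmission that the study looked for in subjects' DR thresholds.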

  17. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory.

    PubMed

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether and how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: 'element motion' (EM) or 'group motion' (GM). In "EM," the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; in "GM," both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides cross at their temporal midpoint. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigation showed that a visual interval coinciding with the gap interval (50-230 ms) in the long glide was perceived as shorter than the same physical interval within either the short glide or the 'gap-transfer' auditory configuration. The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and that the mechanism of selective attention on auditory events also plays a role.

  18. The contribution of visual and proprioceptive information to the perception of leaning in a dynamic motorcycle simulator.

    PubMed

    Lobjois, Régis; Dagonneau, Virginie; Isableu, Brice

    2016-11-01

    Compared with driving or flight simulation, little is known about self-motion perception in riding simulation. The goal of this study was to examine whether or not continuous roll motion supports the sensation of leaning into bends in dynamic motorcycle simulation. To this end, riders were able to freely tune the visual scene and/or motorcycle simulator roll angle to find a pattern that matched their prior knowledge. Our results revealed idiosyncrasy in the combination of visual and proprioceptive information. Some subjects relied more on the visual dimension, but reported increased sickness symptoms with the visual roll angle. Others relied more on proprioceptive information, tuning the direction of the visual scenery to match three possible patterns. Our findings also showed that these two subgroups tuned the motorcycle simulator roll angle in a similar way. This suggests that sustained inertially specified roll motion contributed to the sensation of leaning, in spite of the occurrence of unexpected gravito-inertial stimulation during the tilt. Several hypotheses are discussed. Practitioner Summary: Self-motion perception in motorcycle simulation is a relatively new research area. We examined how participants combined visual and proprioceptive information. Findings revealed individual differences in the visual dimension. However, participants tuned the simulator roll angle similarly, supporting the hypothesis that sustained inertially specified roll motion contributes to a leaning sensation.

  19. Motion parallax in immersive cylindrical display systems

    NASA Astrophysics Data System (ADS)

    Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.

    2012-03-01

    Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Tracking the observer's viewpoint has therefore become essential in immersive virtual reality (VR) systems (cylindrical screens, CAVEs, head-mounted displays) used, for example, in the automotive industry (style reviews, architectural design, ergonomics studies) and in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other, extra-retinal (e.g., vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of an unstable environment unless a non-unity scale factor is applied to the recorded head movements. In addition, cylindrical screens are usually restricted to static observers because of the image distortions that arise when rendering images for viewpoints away from the sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical-screen projection system. Additionally, to evaluate how much discrepancy between visual and extra-retinal cues is tolerated without perceptual distortion, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced into this system. The influence of this artificial gain on the gait stability of free-standing participants was measured. Results indicate that gains below unity significantly alter postural control, whereas the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible way to offer users of immersive virtual reality systems a wider exploration of space.
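    The "motion parallax gain" manipulation described above can be sketched as a simple rescaling of tracked head displacement before the virtual camera pose is computed. This is a minimal illustrative sketch, not the authors' implementation; the function name and the reference (sweet-spot) pose are assumptions.

    ```python
    def virtual_camera_position(head_pos, reference_pos, parallax_gain):
        """Scale the tracked head displacement from a reference (sweet-spot)
        pose by the parallax gain to obtain the virtual camera position.
        Gain 1.0 reproduces natural parallax; gains below 1.0 attenuate it."""
        return tuple(r + parallax_gain * (h - r)
                     for h, r in zip(head_pos, reference_pos))

    # A 20 cm lateral head excursion rendered with gain 0.5 yields only
    # 10 cm of virtual-camera travel, halving the parallax the observer sees.
    cam = virtual_camera_position((0.2, 0.0, 0.0), (0.0, 0.0, 0.0), 0.5)
    ```

    With gain 1.0 the camera simply follows the head, which is the natural-parallax baseline the study compares against.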

  20. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye-tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia patients (N=24), psychotic bipolar disorder patients (N=13), and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in the anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Deficits in the transfer of visual motion information to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  1. Receptive fields for smooth pursuit eye movements and motion perception.

    PubMed

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. To track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies of pursuit eye movements have used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and perceptual systems to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and moved in a direction that differed from the main motion by the inclusion of a vertical component. We measured pursuit and perceptual direction-discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal, and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to the tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
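    The two tuning numbers the abstract reports can be combined into a simple Gaussian weighting profile. This is a minimal sketch, assuming Gaussian fall-off in both dimensions (consistent with the reported FWHM and SD), not the authors' analysis code; the function name is illustrative.

    ```python
    # Sketch of the reported tuning: influence of a localized perturbation
    # falls off as a Gaussian in direction difference (26 deg full width at
    # half height) and in distance from the center of gaze (SD = 8 deg of
    # visual angle). FWHM is converted to an SD via FWHM = 2*sqrt(2*ln 2)*SD.
    import math

    DIR_FWHM_DEG = 26.0
    DIR_SIGMA = DIR_FWHM_DEG / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    SPACE_SIGMA_DEG = 8.0

    def perturbation_weight(direction_offset_deg, eccentricity_deg):
        """Relative influence of a perturbation on pursuit/perception (peak = 1)."""
        w_dir = math.exp(-direction_offset_deg**2 / (2.0 * DIR_SIGMA**2))
        w_space = math.exp(-eccentricity_deg**2 / (2.0 * SPACE_SIGMA_DEG**2))
        return w_dir * w_space
    ```

    By construction, a direction offset of half the FWHM (13 deg) at the center of gaze yields exactly half the peak influence.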

  2. Simulated self-motion in a visual gravity field: sensitivity to vertical and horizontal heading in the human brain.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco

    2013-05-01

    Multiple visual signals are relevant to the perception of heading direction. While the roles of optic flow and depth cues have been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues to the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or with 1 g acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued only by the preceding open-air curves. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial temporal regions, including the parahippocampus and hippocampus, were more activated by horizontal motion, preferentially at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.

  3. A neural basis for the spatial suppression of visual motion perception

    PubMed Central

    Liu, Liu D; Haefner, Ralf M; Pack, Christopher C

    2016-01-01

    In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.16167.001 PMID:27228283
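    Of the three mechanisms the model combines, the read-out strategy is the simplest to illustrate: a direction estimate formed as a vector average of preferred directions, weighted toward neurons whose receptive fields lie near the stimulus center. The sketch below is a toy illustration of that read-out idea only, with an assumed Gaussian weight function and made-up parameters, not the paper's fitted model (which also includes noise correlations and surround suppression).

    ```python
    # Toy center-weighted population read-out: each neuron contributes a unit
    # vector along its preferred direction, scaled by its response and by a
    # Gaussian penalty on receptive-field distance from the stimulus center.
    import math

    def readout_direction(neurons, stimulus_center, rf_sigma=2.0):
        """neurons: iterable of (rf_x, rf_y, preferred_dir_deg, response).
        Returns the read-out motion direction estimate in degrees [0, 360)."""
        sx, sy = stimulus_center
        vx = vy = 0.0
        for rf_x, rf_y, pref_deg, resp in neurons:
            d2 = (rf_x - sx) ** 2 + (rf_y - sy) ** 2
            w = resp * math.exp(-d2 / (2.0 * rf_sigma**2))  # center emphasis
            vx += w * math.cos(math.radians(pref_deg))
            vy += w * math.sin(math.radians(pref_deg))
        return math.degrees(math.atan2(vy, vx)) % 360.0
    ```

    Under this weighting, an equally responsive neuron with a receptive field far from the stimulus center barely moves the estimate, capturing the "emphasize central receptive fields" idea.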

  4. Shared motion signals for human perceptual decisions and oculomotor actions

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Krauzlis, Richard J.

    2003-01-01

    A fundamental question in primate neurobiology is to understand to what extent motor behaviors are driven by shared neural signals that also support conscious perception or by independent subconscious neural signals dedicated to motor control. Although it has clearly been established that cortical areas involved in processing visual motion support both perception and smooth pursuit eye movements, it remains unknown whether the same or different sets of neurons within these structures perform these two functions. Examination of the trial-by-trial variation in human perceptual and pursuit responses during a simultaneous psychophysical and oculomotor task reveals that the direction signals for pursuit and perception are not only similar on average but also co-vary on a trial-by-trial basis, even when performance is at or near chance and the decisions are determined largely by neural noise. We conclude that the neural signal encoding the direction of target motion that drives steady-state pursuit and supports concurrent perceptual judgments emanates from a shared ensemble of cortical neurons.

  5. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    PubMed

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the higher-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data on low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we administered a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task to patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are thought to be intact in schizophrenia. Patients with schizophrenia showed significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment on the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data constrain the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  6. Motion-based nearest vector metric for reference frame selection in the perception of motion.

    PubMed

    Agaoglu, Mehmet N; Clarke, Aaron M; Herzog, Michael H; Ögmen, Haluk

    2016-05-01

    We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight, whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point along its trajectory. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial and angular distance. Drastic changes in the relative sizes of the arcs did not influence motion-reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well, and found no effect of whether or not the reference frame moved at a constant velocity. We examined several form- and motion-based metrics that could potentially unify our findings, and found that a motion-based nearest-vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process but can instead be explained by a field whose strength decreases with the distance between the nearest motion vectors, regardless of the form of the moving objects.

  7. Plasticity Beyond V1: Reinforcement of Motion Perception upon Binocular Central Retinal Lesions in Adulthood.

    PubMed

    Burnat, Kalina; Hu, Tjing-Tjing; Kossut, Małgorzata; Eysel, Ulf T; Arckens, Lutgarde

    2017-09-13

    Induction of a central retinal lesion in both eyes of adult mammals is a model for macular degeneration and leads to retinotopic map reorganization in the primary visual cortex (V1). Here we characterized the spatiotemporal dynamics of molecular activity levels in the central and peripheral representations of five higher-order visual areas (V2/18, V3/19, V4/21a, V5/PMLS, and area 7) and of V1/17, in adult cats of both sexes with central 10° retinal lesions, by means of real-time PCR for the neuronal activity reporter gene zif268. The lesions elicited a similar, permanent reduction in activity in the center of the lesion projection zone of areas V1/17, V2/18, V3/19, and V4/21a, but not in the motion-driven V5/PMLS, which instead displayed an increase in molecular activity at 3 months post-lesion, independent of visual field coordinates. Area 7 likewise displayed decreased activity in its lesion projection zone only during the first weeks post-lesion, and increased activity in its periphery from 1 month onward. We therefore examined the impact of central vision loss on motion perception, using random-dot kinematograms to test the capacity for form-from-motion detection based on direction and velocity cues. We found that the central retinal lesions either did not impair motion detection or even resulted in better performance, specifically when motion discrimination was based on velocity discrimination. In conclusion, we propose that central retinal damage leads to enhanced peripheral vision by sensitizing the visual system to motion processing that relies on feedback from V5/PMLS and area 7. SIGNIFICANCE STATEMENT Central retinal lesions, a model for macular degeneration, result in functional reorganization of the primary visual cortex. Examining the level of cortical reactivation with the molecular activity marker zif268 revealed reorganization in visual areas outside V1. Retinotopic lesion projection zones typically display an initial depression in zif268 expression, followed by partial recovery with post-lesion time. Only the motion-sensitive area V5/PMLS shows no decrease, and even a significant activity increase at 3 months post-retinal lesion. Behavioral tests of motion perception found no impairment and even better sensitivity to higher random-dot stimulus velocities. We demonstrate that the loss of central vision induces functional mobilization of motion-sensitive visual cortex, resulting in enhanced perception of moving stimuli. Copyright © 2017 the authors 0270-6474/17/378989-11$15.00/0.

  8. Stability of Kinesthetic Perception in Efferent-Afferent Spaces: The Concept of Iso-perceptual Manifold.

    PubMed

    Latash, Mark L

    2018-02-21

    The main goal of this paper is to introduce the concept of iso-perceptual manifold for perception of body configuration and related variables (kinesthetic perception) and to discuss its relation to the equilibrium-point hypothesis and the concepts of reference coordinate and uncontrolled manifold. Hierarchical control of action is postulated with abundant transformations between sets of spatial reference coordinates for salient effectors at different levels. Iso-perceptual manifold is defined in the combined space of afferent and efferent variables as the subspace corresponding to a stable percept. Examples of motion along an iso-perceptual manifold (perceptually equivalent motion) are considered during various natural actions. Some combinations of afferent and efferent signals, in particular those implying a violation of body's integrity, give rise to variable percepts by artificial projection onto iso-perceptual manifolds. This framework is used to interpret unusual features of vibration-induced kinesthetic illusions and to predict new illusions not yet reported in the literature. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Differences in Otolith and Abdominal Viscera Graviceptor Dynamics: Implications for Motion Sickness and Perceived Body Position

    NASA Technical Reports Server (NTRS)

    vonGierke, Henning E.; Parker, Donald E.

    1993-01-01

    Human graviceptors, localized to the trunk by Mittelstaedt, probably transduce acceleration via motion of the abdominal viscera. As demonstrated previously in biodynamic vibration and impact tolerance research, the thoraco-abdominal viscera exhibit a resonance at 4 to 6 Hz. Behavioral observations and mechanical models of otolith graviceptor response indicate a phase shift increasing with frequency between 0.01 and 0.5 Hz. Consequently, the potential exists for intermodality sensory conflict between vestibular and visceral graviceptor signals, at least at the mechanical receptor level. The frequency range of this potential conflict corresponds with the primary frequency range for motion sickness incidence in transportation, in subjects rotated about Earth-horizontal axes (barbecue-spit stimulation), and in periodic parabolic-flight microgravity research, as well as for erroneous perception of vertical oscillations in helicopters. We discuss the implications of this hypothesis for previous self-motion perception research and offer suggestions for future studies.

  10. Efference Copy Failure during Smooth Pursuit Eye Movements in Schizophrenia

    PubMed Central

    Dias, Elisa C.; Sanchez, Jamie L.; Schütz, Alexander C.; Javitt, Daniel C.

    2013-01-01

    Abnormal smooth pursuit eye movements in patients with schizophrenia are often considered a consequence of impaired motion perception. Here we used a novel motion prediction task to assess the effects of abnormal pursuit on perception in human patients. Schizophrenia patients (n = 15) and healthy controls (n = 16) judged whether a briefly presented moving target (“ball”) would hit/miss a stationary vertical line segment (“goal”). To relate prediction performance and pursuit directly, we manipulated eye movements: in half of the trials, observers smoothly tracked the ball; in the other half, they fixated on the goal. Strict quality criteria ensured that pursuit was initiated and that fixation was maintained. Controls were significantly better in trajectory prediction during pursuit than during fixation, their performance increased with presentation duration, and their pursuit gain and perceptual judgments were correlated. Such perceptual benefits during pursuit may be due to the use of extraretinal motion information estimated from an efference copy signal. With an overall lower performance in pursuit and perception, patients showed no such pursuit advantage and no correlation between pursuit gain and perception. Although patients' pursuit showed normal improvement with longer duration, their prediction performance failed to benefit from duration increases. This dissociation indicates relatively intact early visual motion processing, but a failure to use efference copy information. Impaired efference function in the sensory system may represent a general deficit in schizophrenia and thus contribute to symptoms and functional outcome impairments associated with the disorder. PMID:23864667

  11. Efference copy failure during smooth pursuit eye movements in schizophrenia.

    PubMed

    Spering, Miriam; Dias, Elisa C; Sanchez, Jamie L; Schütz, Alexander C; Javitt, Daniel C

    2013-07-17

    Abnormal smooth pursuit eye movements in patients with schizophrenia are often considered a consequence of impaired motion perception. Here we used a novel motion prediction task to assess the effects of abnormal pursuit on perception in human patients. Schizophrenia patients (n = 15) and healthy controls (n = 16) judged whether a briefly presented moving target ("ball") would hit/miss a stationary vertical line segment ("goal"). To relate prediction performance and pursuit directly, we manipulated eye movements: in half of the trials, observers smoothly tracked the ball; in the other half, they fixated on the goal. Strict quality criteria ensured that pursuit was initiated and that fixation was maintained. Controls were significantly better in trajectory prediction during pursuit than during fixation, their performance increased with presentation duration, and their pursuit gain and perceptual judgments were correlated. Such perceptual benefits during pursuit may be due to the use of extraretinal motion information estimated from an efference copy signal. With an overall lower performance in pursuit and perception, patients showed no such pursuit advantage and no correlation between pursuit gain and perception. Although patients' pursuit showed normal improvement with longer duration, their prediction performance failed to benefit from duration increases. This dissociation indicates relatively intact early visual motion processing, but a failure to use efference copy information. Impaired efference function in the sensory system may represent a general deficit in schizophrenia and thus contribute to symptoms and functional outcome impairments associated with the disorder.

  12. Perception of the dynamic visual vertical during sinusoidal linear motion.

    PubMed

    Pomante, A; Selen, L P J; Medendorp, W P

    2017-10-01

    The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational from inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework, the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical (as a proxy for the tilt percept) during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, and its orientation had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift relative to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite this dampening effect, our findings may point to a link between the noise components of the vestibular system and the characteristics of the dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and of objects in space.
We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion. Copyright © 2017 the American Physiological Society.
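    The tilt/translation ambiguity the abstract invokes can be made concrete with a small-angle, static Bayesian sketch: the otoliths measure the lateral gravitoinertial force f ≈ g·tilt + a, and Gaussian priors favoring small tilt and, especially, small sustained acceleration split the measurement between the two causes. This is a hedged illustration under assumed prior variances, not the authors' dynamic model, which also incorporates canal cues and produces the phase shift the study measures.

    ```python
    # Linear-Gaussian split of the measured lateral specific force f between
    # tilt (rad, small-angle) and lateral acceleration (m/s^2). With priors
    # tilt ~ N(0, var_tilt), a ~ N(0, var_acc) and measurement noise
    # var_noise, the posterior means are linear in f. Variances illustrative.
    G = 9.81  # gravitational acceleration, m/s^2

    def map_tilt_and_acceleration(f, var_tilt=0.05, var_acc=1.0, var_noise=0.1):
        """Posterior-mean tilt (rad) and acceleration (m/s^2) for force f."""
        denom = G**2 * var_tilt + var_acc + var_noise
        tilt_hat = G * var_tilt * f / denom
        acc_hat = var_acc * f / denom
        return tilt_hat, acc_hat
    ```

    Feeding in a purely inertial 1.75 m/s² peak force (as in the experiment) yields a nonzero tilt estimate: the somatogravic misattribution. Making sustained accelerations more plausible (larger var_acc) shrinks the illusory tilt, which is the qualitative behavior the Bayesian account predicts.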

  13. Development of Motion Processing in Children with Autism

    ERIC Educational Resources Information Center

    Annaz, Dagmara; Remington, Anna; Milne, Elizabeth; Coleman, Mike; Campbell, Ruth; Thomas, Michael S. C.; Swettenham, John

    2010-01-01

    Recent findings suggest that children with autism may be impaired in the perception of biological motion from moving point-light displays. Some children with autism also have abnormally high motion coherence thresholds. In the current study we tested a group of children with autism and a group of typically developing children aged 5 to 12 years of…

  14. The psychophysics of Visual Motion and Global form Processing in Autism

    ERIC Educational Resources Information Center

    Koldewyn, Kami; Whitney, David; Rivera, Susan M.

    2010-01-01

    Several groups have recently reported that people with autism may suffer from a deficit in visual motion processing and proposed that these deficits may be related to a general dorsal stream dysfunction. In order to test the dorsal stream deficit hypothesis, we investigated coherent and biological motion perception as well as coherent form…

  15. Contrast Sensitivity for Motion Detection and Direction Discrimination in Adolescents with Autism Spectrum Disorders and Their Siblings

    ERIC Educational Resources Information Center

    Koh, Hwan Cui; Milne, Elizabeth; Dobkins, Karen

    2010-01-01

    The magnocellular (M) pathway hypothesis proposes that impaired visual motion perception observed in individuals with Autism Spectrum Disorders (ASD) might be mediated by atypical functioning of the subcortical M pathway, as this pathway provides the bulk of visual input to cortical motion detectors. To test this hypothesis, we measured luminance…

  16. Integration of visual and motion cues for simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Practical tools are developed that can extend the state of the art of moving-base flight simulation for research and training. The main approaches in this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues; (2) optimum simulator motion controls; and (3) visual cues in landing.

  17. The Motion Picture and the Teaching of English.

    ERIC Educational Resources Information Center

    Sheridan, Marion C.; And Others

    Written to help a viewer watch a motion picture perceptively, this book explains the characteristics of the film as an art form and examines the role of motion pictures in the English curriculum. Specific topics covered include (1) the technical aspects of the production of films (the order of "shots," camera angle, and point of view), (2) the…

  18. Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs

    PubMed Central

    Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.

    2009-01-01

    Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962

  19. fMRI response during visual motion stimulation in patients with late whiplash syndrome.

    PubMed

    Freitag, P; Greenlee, M W; Wachter, K; Ettlin, T M; Radue, E W

    2001-01-01

    After whiplash trauma, up to one fourth of patients develop chronic symptoms including head and neck pain and cognitive disturbances. Resting-perfusion single-photon emission computed tomography (SPECT) has revealed decreased temporo-parieto-occipital tracer uptake among long-term symptomatic patients with late whiplash syndrome. As MT/MST (V5/V5a) are located in that area, this study addressed the question of whether these patients show impairments in visual motion perception. We examined five symptomatic patients with late whiplash syndrome, five asymptomatic patients after whiplash trauma, and a control group of seven volunteers without a history of trauma. Tests of visual motion perception and functional magnetic resonance imaging (fMRI) measurements during visual motion stimulation were performed. Symptomatic patients showed a significant reduction in their ability to perceive coherent visual motion compared with controls, whereas the asymptomatic patients did not. fMRI activation was similar during random dot motion in all three groups, but was significantly decreased during coherent dot motion in the symptomatic patients compared with the other two groups. Reduced psychophysical motion performance and reduced fMRI responses in symptomatic patients with late whiplash syndrome both point to a functional impairment in cortical areas sensitive to coherent motion. Larger studies are needed to confirm these clinical and functional imaging results, which may provide an additional diagnostic criterion for the evaluation of patients with late whiplash syndrome.

  20. Dynamics of the functional link between area MT LFPs and motion detection

    PubMed Central

    Smith, Jackson E. T.; Beliveau, Vincent; Schoen, Alan; Remz, Jordana; Zhan, Chang'an A.

    2015-01-01

    The evolution of a visually guided perceptual decision results from multiple neural processes, and recent work suggests that signals with different neural origins are reflected in separate frequency bands of the cortical local field potential (LFP). Spike activity and LFPs in the middle temporal area (MT) have a functional link with the perception of motion stimuli (referred to as neural-behavioral correlation). To shed light on the different neural origins that underlie this functional link, we compared the temporal dynamics of the neural-behavioral correlations of MT spikes and LFPs. Wide-band activity was simultaneously recorded from two MT locations in monkeys performing a threshold, two-stimulus, motion-pulse detection task. Shortly after the motion pulse occurred, high-gamma (100–200 Hz) LFPs had a fast, positive correlation with detection performance that was similar to that of the spike response. Beta (10–30 Hz) LFPs were negatively correlated with detection performance, but their dynamics were much slower, peaked late, and did not depend on stimulus configuration or reaction time. A late change in the correlation of all LFPs across the two recording electrodes suggests that a common input arrived at both MT locations prior to the behavioral response. Our results support a framework in which early high-gamma LFPs likely reflected fast, bottom-up sensory processing that was causally linked to perception of the motion pulse, whereas late-arriving beta and high-gamma LFPs likely reflected slower, top-down sources of neural-behavioral correlation that originated after the perception of the motion pulse. PMID:25948867

  1. Dorsal and ventral stream contributions to form-from-motion perception in a patient with a form-from-motion deficit: a case report.

    PubMed

    Mercier, Manuel R; Schwartz, Sophie; Spinelli, Laurent; Michel, Christoph M; Blanke, Olaf

    2017-03-01

    The main model of visual processing in primates proposes an anatomo-functional distinction between the dorsal stream, specialized in spatio-temporal information, and the ventral stream, processing essentially form information. However, these two pathways also communicate to share much visual information. These dorso-ventral interactions have been studied using form-from-motion (FfM) stimuli, revealing that FfM perception first activates dorsal regions (e.g., MT+/V5), followed by successive activations of ventral regions (e.g., LOC). However, relatively little is known about the consequences of focal brain damage to visual areas for these dorso-ventral interactions. In the present case report, we investigated the dynamics of dorsal and ventral activations related to FfM perception (using topographical ERP analysis and electrical source imaging) in a patient suffering from a deficit in FfM perception due to right extrastriate brain damage in the ventral stream. Despite the patient's FfM impairment, both successful (observed for the highest level of FfM signal) and absent/failed FfM perception evoked the same temporal sequence of three processing states observed previously in healthy subjects. During the first period, brain source localization revealed cortical activations along the dorsal stream, consistent with preserved elementary motion processing. During the latter two periods, the patterns of activity differed from those of normal subjects: activations were observed in the ventral stream (as reported for normal subjects), but also in the dorsal pathway, with the strongest and most sustained activity localized in the parieto-occipital regions. In contrast, absent/failed FfM perception was characterized by weaker brain activity, restricted to the more lateral regions.
This case study shows that successful FfM perception, while following the same temporal sequence of processing steps as in normal subjects, evoked different patterns of brain activity. By revealing a brain circuit involving the most rostral part of the dorsal pathway, it provides further support for neuroimaging and brain-lesion studies suggesting the existence of different brain circuits associated with different profiles of interaction between the dorsal and ventral streams.

  2. Effect of transcranial direct current stimulation on vestibular-ocular and vestibulo-perceptual thresholds.

    PubMed

    Kyriakareli, Artemis; Cousins, Sian; Pettorossi, Vito E; Bronstein, Adolfo M

    2013-10-02

    Transcranial direct current stimulation (tDCS) was used in 17 normal individuals to modulate vestibulo-ocular reflex (VOR) and self-motion perception rotational thresholds. The electrodes were applied over the temporoparietal junction bilaterally. Both vestibular nystagmic and perceptual thresholds were increased during as well as after tDCS. Body rotation was labeled as ipsilateral or contralateral to the anode side, but no difference was observed depending on the direction of rotation or hemisphere polarity. The threshold increase during tDCS was greater for the VOR than for motion perception. 'Sham' stimulation had no effect on thresholds. We conclude that tDCS produces an immediate and sustained depression of cortical regions controlling the VOR and movement perception. Temporoparietal areas appear to be involved in vestibular threshold modulation, but the differential effects observed between the VOR and perception suggest a partial dissociation between cortical processing of reflexive and perceptual responses.

  3. Stereomotion is processed by the third-order motion system: reply to comment on "Three-systems theory of human visual motion perception: review and update"

    NASA Astrophysics Data System (ADS)

    Lu, Zhong-Lin; Sperling, George

    2002-10-01

    Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground segmentation based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating-feature displays, in which stereo depth alternates with other features such as texture orientation, indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to: Is it third-order alone, or third-order plus dedicated depth-motion processing? Two new experiments intended to support the dedicated depth-motion-processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector.
All the evidence accumulated so far is consistent with the three-motion-systems theory. © 2002 Optical Society of America

  4. Human body perception and higher-level person perception are dissociated in early development.

    PubMed

    Slaughter, Virginia

    2011-01-01

    Developmental data support the proposal that human body perceptual processing is distinct from other aspects of person perception. Infants are sensitive to human bodily motion and attribute goals to human arm movements before they demonstrate recognition of human body structure. The developmental data suggest the possibility of bidirectional linkages between EBA- and FBA-mediated representations and these higher-level elements of person perception.

  5. People can understand descriptions of motion without activating visual motion brain regions

    PubMed Central

    Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina

    2013-01-01

    What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592

  6. Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated that the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli of the same size. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.

  7. Research on integration of visual and motion cues for flight simulation and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Oman, C. M.; Curry, R. E.

    1977-01-01

    Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving visual fields and those produced by actual body tilt is discussed. Linear-vection studies were included, and the application of the vestibular model to the perception of orientation based on motion cues is presented. Other areas examined include visual cues in the approach to landing and a comparison of linear and nonlinear washout filters using a model of the human vestibular system.

  8. Integration time for the perception of depth from motion parallax.

    PubMed

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f ≈ dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both the retinal image motion and the stimulus window motion for pursuit (mean range = 16.6-33.2 ms). This was found both for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit and for conditions in which the eye was stationary. A large high-contrast masking stimulus disrupted depth discrimination for stimulus presentations shorter than 70-75 ms in both the pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed.
We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio for a selection of points on a complicated stimulus. Copyright © 2012 Elsevier Ltd. All rights reserved.
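    The motion/pursuit ratio quoted in the abstract (d/f ≈ dθ/dα) is simple enough to sketch numerically. The function name and the numeric values below are hypothetical, chosen only to illustrate the arithmetic of the published formula, not the authors' implementation.

```python
def relative_depth(d_theta, d_alpha, fixation_distance):
    """Relative object depth d from retinal image motion dθ, pursuit
    eye movement dα (measured over the same interval), and fixation
    distance f, via the motion/pursuit ratio d ≈ f * (dθ / dα)."""
    if d_alpha == 0:
        raise ValueError("pursuit component must be nonzero")
    return fixation_distance * (d_theta / d_alpha)

# Hypothetical example: 0.5 deg of retinal motion against 5 deg of
# pursuit at a 2 m fixation distance gives a depth offset of 0.2 m
# relative to the fixation point.
print(relative_depth(0.5, 5.0, 2.0))  # 0.2
```

    Note that the ratio uses the same time window for both signals, so the dt terms cancel; only the relative magnitudes of image motion and pursuit matter.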

  9. A comparison of form processing involved in the perception of biological and nonbiological movements

    PubMed Central

    Thurman, Steven M.; Lu, Hongjing

    2016-01-01

    Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception. PMID:26746875
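    The abstract's Bayesian observer "integrates form information according to cue reliability". The standard building block of such models is inverse-variance weighting of independent Gaussian cues, sketched below; the function and the numeric values are hypothetical illustrations of that generic principle, not the authors' specific model.

```python
def combine_cues(estimates, variances):
    """Fuse independent Gaussian cue estimates by weighting each one
    by its reliability (1 / variance). Returns the fused estimate and
    the fused variance (which is lower than any single cue's)."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical example: a position cue reports 10.0 (variance 1.0) and
# an orientation cue reports 14.0 (variance 4.0); the fused estimate
# lands closer to the more reliable position cue.
est, var = combine_cues([10.0, 14.0], [1.0, 4.0])
print(est, var)  # 10.8 0.8
```

    Top-down modulation of the kind described in Experiments 2 and 3 can be expressed in this framework by shifting the prior, i.e. adding the prior as one more "cue" with its own reliability.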

  10. Passive motion reduces vestibular balance and perceptual responses

    PubMed Central

    Fitzpatrick, Richard C; Watson, Shaun R D

    2015-01-01

    With the hypothesis that vestibular sensitivity is regulated to deal with a range of environmental motion conditions, we explored the effects of passive whole-body motion on vestibular perceptual and balance responses. In 10 subjects, vestibular responses were measured before and after a period of imposed passive motion. Vestibulospinal balance reflexes during standing evoked by galvanic vestibular stimulation (GVS) were measured as shear reaction forces. Perceptual tests measured thresholds for detecting angular motion, perceptions of suprathreshold rotation and perceptions of GVS-evoked illusory rotation. The imposed conditioning motion was 10 min of stochastic yaw rotation (0.5–2.5 Hz, ≤ 300 deg s−2) with subjects seated. This conditioning markedly reduced reflexive and perceptual responses. The medium latency galvanic reflex (300–350 ms) was halved in amplitude (48%; P = 0.011) but the short latency response was unaffected. Thresholds for detecting imposed rotation more than doubled (248%; P < 0.001) and remained elevated after 30 min. Over-estimation of whole-body rotation (30–180 deg every 5 s) before conditioning was significantly reduced (41.1 to 21.5%; P = 0.033). Conditioning reduced illusory vestibular sensations of rotation evoked by GVS (mean 113 deg for 10 s at 1 mA) by 44% (P < 0.01) and the effect persisted for at least 1 h (24% reduction; P < 0.05). We conclude that a system of vestibular sensory autoregulation exists and that this probably involves central and peripheral mechanisms, possibly through vestibular efferent regulation. We propose that failure of these regulatory mechanisms at different levels could lead to disorders of movement perception and balance control during standing. Key points: Human activity exposes the vestibular organs to a wide dynamic range of motion. We aimed to discover whether the CNS regulates sensitivity to vestibular afference during exposure to ambient motion.
Balance and perceptual responses to vestibular stimulation were measured before and after a 10 min period of imposed, moderate intensity, stochastic whole-body rotation. After this conditioning, vestibular balance reflexes evoked by galvanic vestibular stimulation were halved in amplitude. Conditioning doubled the thresholds for perceiving small rotations, and reduced perceptions of the amplitude of real rotations, and illusory rotation evoked by galvanic stimulation. We conclude that the CNS auto-regulates sensitivity to vestibular sensory afference and that this probably involves central and peripheral mechanisms, as might arise from vestibular efferent regulation. Failure of these regulatory mechanisms at different levels could lead to disorders of movement perception and balance control during standing. PMID:25809702

  11. Perception of Stand-on-ability: Do Geographical Slants Feel Steeper Than They Look?

    PubMed

    Hajnal, Alen; Wagman, Jeffrey B; Doyon, Jonathan K; Clark, Joseph D

    2016-07-01

    Past research has shown that haptically perceived surface slant by foot is matched with visually perceived slant by a factor of 0.81: slopes perceived visually appear shallower than when stood on without looking. We sought to identify the sources of this discrepancy by asking participants to judge whether they would be able to stand on an inclined ramp. In the first experiment, visual perception was compared to pedal perception, in which participants took half a step with one foot onto an occluded ramp. Visual perception closely matched the actual maximal slope angle that one could stand on, whereas pedal perception underestimated it. Participants may have been less stable in the pedal condition while taking half a step onto the ramp. We controlled for this by having participants hold onto a sturdy tripod in the pedal condition (Experiment 2). This did not eliminate the difference between visual and haptic perception, but repeating the task while sitting on a chair did (Experiment 3). Beyond balance requirements, pedal perception may also be constrained by the limited range of motion at the ankle and knee joints while standing. Indeed, when we restricted range of motion with an ankle brace, pedal perception underestimated the affordance (Experiment 4). Implications for ecological theory are discussed, including the notion of functional equivalence and the role of exploration in perception. © The Author(s) 2016.

  12. Long-lasting effects of neck muscle vibration and contraction on self-motion perception of vestibular origin.

    PubMed

    Pettorossi, Vito Enrico; Panichi, Roberto; Botti, Fabio Massimo; Biscarini, Andrea; Filippi, Guido Maria; Schieppati, Marco

    2015-10-01

    The aim of this study was to show that neck proprioceptive input can induce long-term effects on vestibular-dependent self-motion perception. Motion perception was assessed by measuring the subject's error in tracking in the dark the remembered position of a fixed target during whole-body yaw asymmetric rotation of a supporting platform, consisting of a fast rightward half-cycle and a slow leftward half-cycle returning the subject to the initial position. Neck muscles were relaxed or voluntarily contracted, and/or vibrated. Whole-body rotation was administered during or at various intervals after the vibration train. The tracking position error (TPE) at the end of the platform rotation was measured during and after the muscle conditioning maneuvers. Neck input produced immediate and sustained changes in the vestibular perceptual response to whole-body rotation. Vibration of the left sternocleidomastoid (SCM) or right splenius capitis (SC), or isometric neck muscle effort to rotate the head to the right, enhanced the TPE by decreasing the perception of the slow rotation. The reverse effect was observed by activating the contralateral muscle. The effects persisted after the end of SCM conditioning and slowly vanished within several hours, as tested by late asymmetric rotations. The aftereffect increased in amplitude and persistence with longer vibration trains (from 1 to 10 min), higher vibration frequencies (from 5 to 100 Hz), or contraction of the vibrated muscle. Symmetric yaw rotation elicited a negligible TPE, upon which neck muscle vibrations were ineffective. Neck proprioceptive input induces enduring changes in vestibular-dependent self-motion perception, conditional on the vestibular stimulus features and on the side, characteristics, and contraction status of the vibrated muscles.
This shows that our perception of whole-body yaw rotation is not only dependent on accurate vestibular information but is also modulated by proprioceptive information related to the previously experienced position of the head with respect to the trunk. Tonic proprioceptive inflow, as might occur as a consequence of enduring or permanent head postures, can induce adaptive plastic changes in vestibular-dependent motion sensitivity. These changes might be counteracted by vibration of selected neck muscles. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  13. The First Time Ever I Saw Your Feet: Inversion Effect in Newborns' Sensitivity to Biological Motion

    ERIC Educational Resources Information Center

    Bardi, Lara; Regolin, Lucia; Simion, Francesca

    2014-01-01

    Inversion effect in biological motion perception has been recently attributed to an innate sensitivity of the visual system to the gravity-dependent dynamic of the motion. However, the specific cues that determine the inversion effect in naïve subjects were never investigated. In the present study, we have assessed the contribution of the local…

  14. Spatial and temporal processing in healthy aging: implications for perceptions of driving skills.

    PubMed

    Conlon, Elizabeth; Herkes, Kathleen

    2008-07-01

    Sensitivity to the attributes of a stimulus (form or motion) and accuracy when detecting rapidly presented stimulus information were measured in older (N = 36) and younger (N = 37) groups. Before and after practice, the older group was significantly less sensitive to global motion (but not to form) and less accurate on a rapid sequencing task when detecting individual elements presented in long but not short sequences. These effect sizes yielded statistical power between 0.5 and 1.00 across the different analyses. The reduced sensitivity of older individuals to temporal but not spatial stimuli adds support to previous findings of a selective age-related deficit in temporal processing. Older women were significantly less sensitive than older men, younger men and younger women on the global motion task. Gender effects were evident when, in response to global motion stimuli, complex extraction and integration processes needed to be undertaken rapidly. Significant moderate correlations were found between age, global motion sensitivity, and reported perceptions of other vehicles and road signs when driving. These associations suggest that reduced motion sensitivity may produce functional difficulties for older adults when judging speeds or estimating gaps in traffic while driving.

  15. The continuous Wagon Wheel Illusion depends on, but is not identical to, neuronal adaptation.

    PubMed

    VanRullen, Rufin

    2007-07-01

    The occurrence of perceived reversed motion while observers view a continuous, periodically moving stimulus (a bistable phenomenon coined the "continuous Wagon Wheel Illusion" or "c-WWI") has been taken as evidence that some aspects of motion perception rely on discrete sampling of visual information. Alternative accounts rely on the possibility of a motion aftereffect that may become visible even while the adapting stimulus is present. Here I show that motion adaptation might be necessary, but is not sufficient, to explain the illusion. When local adaptation is prevented by slowly drifting the moving wheel across the retina, the c-WWI tends to decrease, as do other bistable percepts (e.g. binocular rivalry). However, the strength of the c-WWI and that of adaptation (as measured by either the static or flicker motion aftereffects) are not directly related: although the c-WWI decreases with increasing eccentricity, the aftereffects actually intensify concurrently. A similar dissociation can be induced by manipulating stimulus contrast. This indicates that the c-WWI may be enabled by, but is not equivalent to, local motion adaptation, and that other factors such as discrete sampling may be involved in its generation.

  16. Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations

    NASA Technical Reports Server (NTRS)

    Mitchell, David G.; Hart, Daniel C.

    1993-01-01

    The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delay and motion command washout filtering. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities rather than changes in the aircraft model itself. The results indicate a need to tailor the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than to eliminate the visual delay entirely through lead compensation.

  17. Effects of motion speed in action representations

    PubMed Central

    van Dam, Wessel O.; Speed, Laura J.; Lai, Vicky T.; Vigliocco, Gabriella; Desai, Rutvik H.

    2017-01-01

    Grounded cognition accounts of semantic representation posit that brain regions traditionally linked to perception and action play a role in grounding the semantic content of words and sentences. Sensory-motor systems are thought to support partially abstract simulations through which conceptual content is grounded. However, which details of sensory-motor experience are included in, or excluded from these simulations, is not well understood. We investigated whether sensory-motor brain regions are differentially involved depending on the speed of actions described in a sentence. We addressed this issue by examining the neural signature of relatively fast (The old lady scurried across the road) and slow (The old lady strolled across the road) action sentences. The results showed that sentences that implied fast motion modulated activity within the right posterior superior temporal sulcus and the angular and middle occipital gyri, areas associated with biological motion and action perception. Sentences that implied slow motion resulted in greater signal within the right primary motor cortex and anterior inferior parietal lobule, areas associated with action execution and planning. These results suggest that the speed of described motion influences representational content and modulates the nature of conceptual grounding. Fast motion events are represented more visually whereas motor regions play a greater role in representing conceptual content associated with slow motion. PMID:28160739

  18. Direction of Perceived Motion and Eye Movements Show Similar Biases for Asymmetrically Windowed Moving Plaids

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Mulligan, J. B.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)

    1995-01-01

    We have shown that moving a plaid in an asymmetric window biases the perceived direction of motion (Beutter, Mulligan & Stone, ARVO 1994). We now explore whether these biased motion signals might also drive the smooth eye-movement response by comparing the perceived and tracked directions. The human smooth oculomotor response to moving plaids appears to be driven by the perceived rather than the veridical direction of motion. This suggests that human motion perception and smooth eye movements share underlying neural motion-processing substrates as has already been shown to be true for monkeys.

  19. Self-Motion Perception and Motion Sickness

    NASA Technical Reports Server (NTRS)

    Fox, Robert A.

    1991-01-01

    Motion sickness typically is considered a bothersome artifact of exposure to passive motion in vehicles of conveyance. This condition seldom has a significant impact on health because it is of brief duration, it usually can be prevented simply by avoiding the eliciting condition and, when the conditions that produce it are unavoidable, sickness dissipates with continued exposure. The studies conducted examined several aspects of motion sickness in animal models. A principal objective of these studies was to investigate the neuroanatomy important in motion sickness, examining both the utility of putative animal models and the neural mechanisms involved.

  20. Perception of Self-Motion and Regulation of Walking Speed in Young-Old Adults.

    PubMed

    Lalonde-Parsi, Marie-Jasmine; Lamontagne, Anouk

    2015-07-01

    Whether a reduced perception of self-motion contributes to poor walking speed adaptations in older adults is unknown. In this study, speed discrimination thresholds (perceptual task) and walking speed adaptations (walking task) were compared between young (19-27 years) and young-old individuals (63-74 years), and the relationship between performance on the two tasks was examined. Participants were evaluated while viewing a virtual corridor in a helmet-mounted display. Speed discrimination thresholds were determined using a staircase procedure. Walking speed modulation was assessed on a self-paced treadmill while participants were exposed to different self-motion speeds ranging from 0.25 to 2 times their comfortable speed. For each speed, participants were instructed to match the self-motion speed described by the moving corridor. On the walking task, participants displayed smaller walking speed errors at comfortable walking speeds compared with slower or faster speeds. The young-old adults presented larger speed discrimination thresholds (perceptual experiment) and larger walking speed errors (walking experiment) compared with young adults. Larger walking speed errors were associated with higher discrimination thresholds. The enhanced performance on the walking task at comfortable speed suggests that intersensory calibration processes are influenced by experience, hence optimized for frequently encountered conditions. The altered performance of the young-old adults on the perceptual and walking tasks, as well as the relationship observed between the two tasks, suggests that a poor perception of visual motion information may contribute to the poor walking speed adaptations that arise with aging.
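    The abstract says the speed discrimination thresholds were "determined using a staircase procedure" without specifying the rule. As a generic illustration (not the authors' protocol), here is a 1-up/2-down staircase, a standard variant that converges near the 70.7%-correct level, run against a deterministic toy observer:

```python
def staircase_threshold(start, step, n_reversals, is_correct):
    """1-up/2-down adaptive staircase: two consecutive correct
    responses lower the stimulus level (harder), any error raises it
    (easier). The threshold is estimated as the mean level at the
    reversal points, where the track changes direction."""
    level = start
    streak = 0        # consecutive correct responses
    last_dir = 0      # -1 = last change went down, +1 = went up
    reversals = []
    while len(reversals) < n_reversals:
        if is_correct(level):
            streak += 1
            if streak == 2:               # two correct -> step down
                streak = 0
                if last_dir == +1:
                    reversals.append(level)
                level = max(level - step, step)
                last_dir = -1
        else:                             # one error -> step up
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            level += step
            last_dir = +1
    return sum(reversals) / len(reversals)

# Deterministic toy observer: correct whenever the speed difference is
# at least 0.35 (arbitrary units). The staircase homes in on ~0.35.
estimate = staircase_threshold(1.0, 0.1, 8, lambda lvl: lvl >= 0.35)
print(round(estimate, 2))  # 0.35
```

    A real psychophysical session would replace the toy observer with trial-by-trial subject responses and typically add step-size halving after the first few reversals.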

  1. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  2. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm" and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. This algorithm is formulated by optimal control and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. In preliminary pilot testing, the optimal algorithm incorporating a new otolith model produced improved motion cues. The nonlinear algorithm's vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared with the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.
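    The "washout" idea at the heart of all motion cueing can be illustrated with its classical building block: a high-pass filter that passes acceleration onsets to the motion platform but washes out sustained acceleration so the platform drifts back to neutral. This is a sketch of that classical element only, not the optimal or nonlinear algorithms developed in the report; the time constant and sample rate are illustrative.

    ```python
    # Illustrative classical-washout building block: a first-order discrete
    # high-pass filter applied to simulated aircraft acceleration. Onsets pass
    # through to the platform command; sustained acceleration washes out.

    def highpass_washout(accel, dt=0.01, tau=1.0):
        """First-order high-pass: y[k] = a*(y[k-1] + u[k] - u[k-1]), a = tau/(tau+dt)."""
        a = tau / (tau + dt)
        y, u_prev, out = 0.0, 0.0, []
        for u in accel:
            y = a * (y + u - u_prev)
            u_prev = u
            out.append(y)
        return out

    # A step in simulated acceleration: the commanded cue jumps at onset,
    # then decays toward zero (washes out).
    cues = highpass_washout([0.0] * 10 + [1.0] * 500)
    ```

    The time-varying washout described in the abstract goes beyond this fixed filter by adapting the effective time constant to cue magnitude.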

  3. The Responsiveness of Biological Motion Processing Areas to Selective Attention Towards Goals

    PubMed Central

    Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert

    2012-01-01

    A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas – particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. PMID:22796987

  4. Visual motion disambiguation by a subliminal sound.

    PubMed

    Dufour, Andre; Touzalin, Pascale; Moessinger, Michèle; Brochard, Renaud; Després, Olivier

    2008-09-01

    There is growing interest in the effect of sound on visual motion perception. One model involves the illusion created when two identical objects moving towards each other on a two-dimensional visual display can be seen to either bounce off or stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the sound bounce-inducing effect is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.

  5. Detecting agency from the biological motion of veridical vs animated agents

    PubMed Central

    Kelley, William M.; Heatherton, Todd F.; Macrae, C. Neil

    2007-01-01

    The ability to detect agency is fundamental for understanding the social world. Underlying this capacity are neural circuits that respond to patterns of intentional biological motion in the superior temporal sulcus and temporoparietal junction. Here we show that the brain's blood oxygenation level dependent (BOLD) response to such motion is modulated by the representation of the actor. Dynamic social interactions were portrayed by either live-action agents or computer-animated agents, enacting the exact same patterns of biological motion. Using an event-related design, we found that the BOLD response associated with the perception and interpretation of agency was greater when identical physical movements were performed by real rather than animated agents. This finding has important implications for previous work on biological motion that has relied upon computer-animated stimuli and demonstrates that the neural substrates of social perception are finely tuned toward real-world agents. In addition, the response in lateral temporal areas was observed in the absence of instructions to make mental inferences, thus demonstrating the spontaneous implementation of the intentional stance. PMID:18985141

  6. Breaking camouflage and detecting targets require optic flow and image structure information.

    PubMed

    Pan, Jing Samantha; Bingham, Ned; Chen, Chang; Bingham, Geoffrey P

    2017-08-01

    Use of motion to break camouflage extends back to the Cambrian [In the Blink of an Eye: How Vision Sparked the Big Bang of Evolution (New York: Basic Books, 2003)]. We investigated the ability to break camouflage and continue to see camouflaged targets after motion stops. This is crucial for the survival of hunting predators. With camouflage, visual targets and distracters cannot be distinguished using only static image structure (i.e., appearance). Motion generates another source of optical information, optic flow, which breaks camouflage and specifies target locations. Optic flow calibrates image structure with respect to spatial relations among targets and distracters, and calibrated image structure makes previously camouflaged targets perceptible in a temporally stable fashion after motion stops. We investigated this proposal using laboratory experiments and compared how many camouflaged targets were identified either with optic flow information alone or with combined optic flow and image structure information. Our results show that the combination of motion-generated optic flow and target-projected image structure information yielded efficient and stable perception of camouflaged targets.

  7. Mechanisms underlying the perceived angular velocity of a rigidly rotating object.

    PubMed

    Caplovitz, G P; Hsieh, P-J; Tse, P U

    2006-09-01

    The perceived angular velocity of an ellipse undergoing a constant rate of rotation will vary as its aspect ratio is changed. Specifically, a "fat" ellipse with a low aspect ratio will in general be perceived to rotate more slowly than a "thin" ellipse with a higher aspect ratio. Here we investigate this illusory underestimation of angular velocity in the domain where ellipses appear to be rotating rigidly. We characterize the relationship between aspect ratio and perceived angular velocity under luminance and non-luminance-defined conditions. The data are consistent with two hypotheses concerning the construction of rotational motion percepts. The first hypothesis is that perceived angular velocity is determined by low-level component-motion (i.e., motion-energy) signals computed along the ellipse's contour. The second hypothesis is that relative maxima of positive contour curvature are treated as non-component, form-based "trackable features" (TFs) that contribute to the visual system's construction of the motion percept. Our data suggest that perceived angular velocity is driven largely by component signals, but is modulated by the motion signals of trackable features, such as corners and regions of high contour curvature.
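    The component-motion hypothesis in this abstract can be made concrete numerically. Local motion detectors measure only the velocity component perpendicular to a contour (the aperture problem), and for a rigidly rotating ellipse that normal component shrinks as the shape approaches a circle. The sketch below is an illustrative calculation of this geometry, not the authors' analysis.

    ```python
    import math

    # Mean contour-normal ("component") speed along a rotating ellipse.
    # For a point (a*cos t, b*sin t) rotating at angular velocity w, the
    # velocity is (-w*y, w*x); local detectors see only its projection onto
    # the outward contour normal, which is proportional to (b*cos t, a*sin t).

    def mean_normal_speed(a, b, w=1.0, n=3600):
        total = 0.0
        for k in range(n):
            t = 2 * math.pi * k / n
            x, y = a * math.cos(t), b * math.sin(t)
            vx, vy = -w * y, w * x                 # rigid rotation
            nx, ny = b * math.cos(t), a * math.sin(t)
            norm = math.hypot(nx, ny)
            total += abs(vx * nx + vy * ny) / norm
        return total / n

    fat = mean_normal_speed(1.0, 0.8)     # low aspect ratio
    thin = mean_normal_speed(1.0, 0.2)    # high aspect ratio
    circle = mean_normal_speed(1.0, 1.0)  # zero everywhere
    ```

    The circle yields zero normal motion (a rotating circle appears static), and the thin ellipse yields a larger mean component signal than the fat one, in the same direction as the illusion the abstract describes.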

  8. An unbiased measure of the contributions of chroma and luminance to saccadic suppression of displacement.

    PubMed

    Anand, Sulekha; Bridgeman, Bruce

    2002-02-01

    Perception of image displacement is suppressed during saccadic eye movements. We probed the source of saccadic suppression of displacement by testing whether it selectively affects chromatic- or luminance-based motion information. Human subjects viewed a stimulus in which chromatic and luminance cues provided conflicting information about displacement direction. Apparent motion occurred during either fixation or a 19.5 degree saccade. Subjects detected motion and discriminated displacement direction in each trial. They reported motion in over 90% of fixation trials and over 70% of saccade trials. During fixation, the probability of perceiving the direction carried by chromatic cues decreased as luminance contrast increased. During saccades, subjects tended to perceive the direction indicated by luminance cues when luminance contrast was high. However, when luminance contrast was low, subjects showed no preference for the chromatic- or luminance-based direction. Thus magnocellular channels are suppressed, while stimulation of parvocellular channels is below threshold, so that neither channel drives motion perception during saccades. These results confirm that magnocellular inhibition is the source of saccadic suppression.

  9. Ground-based training for the stimulus rearrangement encountered during spaceflight

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Parker, D. E.; Harm, D. L.; Michaud, L.

    1988-01-01

    Approximately 65-70% of the crew members now experience motion sickness of some degree during the first 72 h of orbital flight on the Space Shuttle. Lack of congruence among signals from spatial orientation systems leads to sensory conflict, which appears to be the basic cause of space motion sickness. A project to develop training devices and procedures to preadapt astronauts to the stimulus rearrangements of microgravity is currently being pursued. The preflight adaptation trainers (PATs) are intended to: demonstrate sensory phenomena likely to be experienced in flight, allow astronauts to train preflight in an altered sensory environment, alter sensory-motor reflexes, and alleviate or shorten the duration of space motion sickness. Four part-task PATs are anticipated. The trainers are designed to evoke two adaptation processes, sensory compensation and sensory reinterpretation, which are necessary to maintain spatial orientation in a weightless environment. Recent investigations using one of the trainers indicate that self-motion perception of linear translation is enhanced when body tilt is combined with visual surround translation, and that a 270 degrees phase angle relationship between tilt and surround motion produces maximum translation perception.

  10. Influence of Passive Joint Stiffness on Proprioceptive Acuity in Individuals With Functional Instability of the Ankle.

    PubMed

    Marinho, Hellen Veloso Rocha; Amaral, Giovanna Mendes; de Souza Moreira, Bruno; Araújo, Vanessa Lara; Souza, Thales Rezende; Ocarino, Juliana Melo; da Fonseca, Sérgio Teixeira

    2017-12-01

    Study Design Controlled laboratory study, cross-sectional. Background Deficits in ankle proprioceptive acuity have been reported in persons with functional instability of the ankle. Passive stiffness has been proposed as a possible mechanism underlying proprioceptive acuity. Objective To compare proprioceptive acuity and passive ankle stiffness in persons with and without functional ankle instability, and to assess the influence of passive joint stiffness on proprioceptive acuity in persons with functional ankle instability. Methods A sample of 18 subjects with and 18 without complaints of functional ankle instability following lateral ankle sprain participated. An isokinetic dynamometer was used to compare motion perception threshold, passive position sense, and passive ankle stiffness between groups. To evaluate the influence of passive stiffness on proprioceptive acuity, individuals in the lateral functional ankle instability group were divided into 2 subgroups: "high" and "low" passive ankle stiffness. Results The functional ankle instability group exhibited increased motion perception threshold when compared with the corresponding limb of the control group. Between-group differences were not found for passive position sense and passive ankle stiffness. Those in the functional ankle instability group with higher passive ankle stiffness had smaller motion perception thresholds than those with lower passive ankle stiffness. Conclusion Unlike motion perception threshold, passive position sense is not affected by the presence of functional ankle instability. Passive ankle stiffness appears to influence proprioceptive acuity in persons with functional ankle instability. J Orthop Sports Phys Ther 2017;47(12):899-905. Epub 7 Oct 2017. doi:10.2519/jospt.2017.7030.

  11. Effects of aging on perception of motion

    NASA Astrophysics Data System (ADS)

    Kaur, Manpreet; Wilder, Joseph; Hung, George; Julesz, Bela

    1997-09-01

    Driving requires two basic visual components: 'visual sensory function' and 'higher order skills.' Among the elderly, it has been observed that when attention must be divided in the presence of multiple objects, attentional skills and relational processing are markedly impaired, along with basic visual sensory function. A high frame rate imaging system was developed to assess the elderly driver's ability to locate and distinguish computer generated images of vehicles and to determine their direction of motion in a simulated intersection. Preliminary experiments were performed at varying target speeds and angular displacements to study the effect of these parameters on motion perception. Results for subjects in four different age groups, ranging from mid-twenties to mid-sixties, show significantly better performance for the younger subjects as compared to the older ones.

  12. Social forces for team coordination in ball possession game

    NASA Astrophysics Data System (ADS)

    Yokoyama, Keiko; Shima, Hiroyuki; Fujii, Keisuke; Tabuchi, Noriyuki; Yamamoto, Yuji

    2018-02-01

    Team coordination is a basic human behavioral trait observed in many real-life communities. To promote teamwork, it is important to cultivate social skills that elicit team coordination. In the present work, we consider which social skills are indispensable for individuals performing a ball possession game in soccer. We develop a simple social force model that describes the synchronized motion of offensive players. Comparing the simulation results with experimental observations, we found that the cooperative social force, a measure of perception skill, has the most important role in reproducing the harmonized collective motion of experienced players in the task. We further developed an experimental tool that facilitates real players' perception of interpersonal distance, revealing that the tool improves novice players' motions as if the cooperative social force were imposed.
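    A social-force-style update of the general kind described above can be sketched in a few lines. This toy model is not the paper's model: the force forms, coefficients, and the one-dimensional setting are all illustrative. Each player is attracted toward a target position, while a "cooperative" term pushes players apart when too close and pulls them together when too far, maintaining a preferred interpersonal distance.

    ```python
    # Toy 1-D social-force update for two teammates. All coefficients and
    # force forms are illustrative assumptions, not the published model.

    def step(pos, goals, d_pref=2.0, k_goal=0.5, k_coop=0.3, dt=0.1):
        new = []
        for i, (x, g) in enumerate(zip(pos, goals)):
            f = k_goal * (g - x)                   # attraction to own target
            for j, xj in enumerate(pos):
                if j != i:
                    d = x - xj
                    # cooperative force: repel if closer than d_pref,
                    # attract if farther (directed along -d/|d|)
                    f += k_coop * (abs(d) - d_pref) * (-d / abs(d))
            new.append(x + dt * f)
        return new

    # Two players starting too close (0.5 apart) spread out toward a
    # compromise between their targets and the preferred 2.0 separation.
    pos = [0.0, 0.5]
    for _ in range(200):
        pos = step(pos, goals=[0.0, 0.5])
    ```

    The equilibrium separation lands between the target separation (0.5) and the preferred interpersonal distance (2.0), weighted by the two force gains, which is the kind of compromise such models are built to capture.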

  13. Unimpaired Perception of Social and Physical Causality, but Impaired Perception of Animacy in High Functioning Children with Autism

    ERIC Educational Resources Information Center

    Congiu, Sara; Schlottmann, Anne; Ray, Elizabeth

    2010-01-01

    We investigated perception of social and physical causality and animacy in simple motion events, for high-functioning children with autism (CA = 13, VMA = 9.6). Children matched 14 different animations to pictures showing physical, social or non-causality. In contrast to previous work, children with autism performed at a high level similar to…

  14. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  15. Biological motion perception links diverse facets of theory of mind during middle childhood.

    PubMed

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-06-01

    Two cornerstones of social development--social perception and theory of mind--undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Reprint of "Biological motion perception links diverse facets of theory of mind during middle childhood".

    PubMed

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-09-01

    Two cornerstones of social development--social perception and theory of mind--undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Remote operation: a selective review of research into visual depth perception.

    PubMed

    Reinhardt-Rutland, A H

    1996-07-01

    Some perceptual-motor operations are performed remotely; examples include the handling of life-threatening materials and surgical procedures. A camera conveys the site of operation to a TV monitor, so depth perception relies mainly on pictorial information, perhaps with enhancement of the occlusion cue by motion. However, motion information such as motion parallax is not likely to be important. The effectiveness of pictorial information is diminished by monocular and binocular information conveying flatness of the screen and by difficulties in scaling: only a degree of relative depth can be conveyed. Furthermore, pictorial information can mislead. Depth perception is probably adequate in remote operation if target objects are well separated, with well-defined edges and familiar shapes. Stereoscopic viewing systems are being developed to introduce binocular information to remote operation. However, stereoscopic viewing is problematic because binocular disparity conflicts with convergence and monocular information. An alternative strategy to improve precision in remote operation may be to rely on individuals who lack binocular function: there is redundancy in depth information, and such individuals seem to compensate for the lack of binocular function.

  18. Perception of the Body in Space: Mechanisms

    NASA Technical Reports Server (NTRS)

    Young, Laurence R.

    1991-01-01

    The principal topic is the perception of body orientation and motion in space and the extent to which these perceptual abstractions can be related directly to knowledge of sensory mechanisms, particularly for the vestibular apparatus. Spatial orientation is firmly based on the underlying sensory mechanisms and their central integration. For some of the simplest situations, like rotation about a vertical axis in darkness, the dynamic response of the semicircular canals furnishes almost enough information to explain the sensations of turning and stopping. For more complex conditions involving multiple sensory systems and possible conflicts among their messages, a mechanistic explanation requires significant speculative assumptions. The models that exist for multisensory spatial orientation are still largely of the non-rational-parameter variety. They are capable of predicting relationships among input motions and output perceptions of motion, but they involve computational functions that do not now, and perhaps never will, have their counterpart in central nervous system machinery. The challenge continues to be in the iterative process of testing models by experiment, correcting them where necessary, and testing them again.

  19. Training in Contrast Detection Improves Motion Perception of Sinewave Gratings in Amblyopia

    PubMed Central

    Hou, Fang; Huang, Chang-bing; Tao, Liming; Feng, Lixia; Zhou, Yifeng; Lu, Zhong-Lin

    2011-01-01

    Purpose. One critical concern about using perceptual learning to treat amblyopia is whether training with one particular stimulus and task generalizes to other stimuli and tasks. In the spatial domain, it has been found that the bandwidth of contrast sensitivity improvement is much broader in amblyopes than in normals. Because previous studies suggested the local motion deficits in amblyopia are explained by the spatial vision deficits, the hypothesis for this study was that training in the spatial domain could benefit motion perception of sinewave gratings. Methods. Nine adult amblyopes (mean age, 22.1 ± 5.6 years) were trained in a contrast detection task in the amblyopic eye for 10 days. Visual acuity, spatial contrast sensitivity functions, and temporal modulation transfer functions (MTF) for sinewave motion detection and discrimination were measured for each eye before and after training. Eight adult amblyopes (mean age, 22.6 ± 6.7 years) served as control subjects. Results. In the amblyopic eye, training improved (1) contrast sensitivity by 6.6 dB (or 113.8%) across spatial frequencies, with a bandwidth of 4.4 octaves; (2) sensitivity of motion detection and discrimination by 3.2 dB (or 44.5%) and 3.7 dB (or 53.1%) across temporal frequencies, with bandwidths of 3.9 and 3.1 octaves, respectively; (3) visual acuity by 3.2 dB (or 44.5%). The fellow eye also showed a small amount of improvement in contrast sensitivities and no significant change in motion perception. Control subjects who received no training demonstrated no obvious improvement in any measure. Conclusions. The results demonstrate substantial plasticity in the amblyopic visual system, and provide additional empirical support for perceptual learning as a potential treatment for amblyopia. PMID:21693615
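    The paired dB and percentage figures in this abstract are mutually consistent under the standard sensitivity convention dB = 20·log10(post/pre), so percent improvement = (10^(dB/20) − 1) × 100. Assuming that convention (the abstract does not state it explicitly), a quick check reproduces the quoted percentages:

    ```python
    # Convert a sensitivity improvement in dB (20*log10 convention, assumed)
    # to a percent improvement, and check the figures quoted in the abstract.

    def db_to_percent(db):
        return (10 ** (db / 20) - 1) * 100

    print(round(db_to_percent(6.6), 1))  # 113.8  (contrast sensitivity)
    print(round(db_to_percent(3.2), 1))  # 44.5   (motion detection, acuity)
    print(round(db_to_percent(3.7), 1))  # 53.1   (motion discrimination)
    ```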

  20. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  1. Color Improves Speed of Processing But Not Perception in a Motion Illusion

    PubMed Central

    Perry, Carolyn J.; Fallah, Mazyar

    2012-01-01

    When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination and reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored. PMID:22479255

  2. Color improves speed of processing but not perception in a motion illusion.

    PubMed

    Perry, Carolyn J; Fallah, Mazyar

    2012-01-01

    When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination or reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored.
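
    The mutual-inhibition account described in this abstract can be sketched as a toy model in which each surface's represented direction pushes the other away by an amount that falls off with their angular separation. The `gain` and `tuning_deg` parameters below are illustrative assumptions, not values from the paper.

```python
import math

def repulsed_directions(d1_deg, d2_deg, gain=0.15, tuning_deg=40.0):
    """Toy mutual-inhibition model of direction repulsion: the two
    represented directions inhibit each other in proportion to their
    tuning overlap, shifting the percepts apart (parameters are
    illustrative, not fitted to data)."""
    sep = d2_deg - d1_deg
    # Inhibition strength falls off with angular separation (Gaussian overlap).
    push = gain * sep * math.exp(-(sep / tuning_deg) ** 2)
    return d1_deg - push, d2_deg + push
```

    For two surfaces moving at 80 and 100 degrees, the model returns percepts pushed slightly below 80 and above 100 degrees: a repulsion of a few degrees, comparable in character (not magnitude) to the illusion described.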

  3. Perception and the strongest sensory memory trace of multi-stable displays both form shortly after the stimulus onset.

    PubMed

    Pastukhov, Alexander

    2016-02-01

    We investigated the relation between perception and sensory memory of multi-stable structure-from-motion displays. The latter is an implicit visual memory that reflects a recent history of perceptual dominance and influences only the initial perception of multi-stable displays. First, we established the earliest time point when the direction of an illusory rotation can be reversed after the display onset (29-114 ms). Because our display manipulation did not bias perception towards a specific direction of illusory rotation but only signaled the change in motion, this means that the perceptual dominance was established no later than 29-114 ms after the stimulus onset. Second, we used orientation-selectivity of sensory memory to establish which display orientation produced the strongest memory trace and when this orientation was presented during the preceding prime interval (80-140 ms). Surprisingly, both estimates point towards the time interval immediately after the display onset, indicating that both perception and sensory memory form at approximately the same time. This suggests a tighter integration between perception and sensory memory than previously thought, warrants a reconsideration of its role in visual perception, and indicates that sensory memory could be a unique behavioral correlate of the earlier perceptual inference that can be studied post hoc.

  4. Moving Faces

    ERIC Educational Resources Information Center

    Journal of College Science Teaching, 2005

    2005-01-01

    A recent study by Zara Ambadar and Jeffrey F. Cohn of the University of Pittsburgh and Jonathan W. Schooler of the University of British Columbia, examined how motion affects people's judgment of subtle facial expressions. Two experiments demonstrated robust effects of motion in facilitating the perception of subtle facial expressions depicting…

  5. A binaural beat constructed from a noise

    PubMed Central

    Akeroyd, Michael A

    2012-01-01

    The binaural beat has been used for over one hundred years as a stimulus for generating the percept of motion. Classically the beat consists of a pure tone at one ear (e.g. 500 Hz) and the same pure tone at the other ear but shifted upwards or downwards in frequency (e.g., 501 Hz). An experiment and binaural computational analysis are reported which demonstrate that a more powerful motion percept can be obtained by applying the concept of the frequency shift to a noise, via an upwards or downwards shift in the frequency of the Fourier components of its spectrum. PMID:21218863
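
    The construction described here, shifting every Fourier component of a noise upward in frequency for one ear, can be sketched with an FFT. The function name and parameter defaults below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shifted_noise_beat(fs=44100, dur=1.0, shift_hz=1.0, seed=0):
    """Binaural 'beat' built from noise: the left ear gets Gaussian noise,
    the right ear the same noise with every Fourier component shifted
    upward in frequency by shift_hz (a sketch of the construction in the
    abstract; names and defaults are illustrative)."""
    n = int(fs * dur)
    rng = np.random.default_rng(seed)
    left = rng.standard_normal(n)
    spec = np.fft.rfft(left)
    bins = int(round(shift_hz * dur))          # FFT bin spacing is 1/dur Hz
    shifted = np.zeros_like(spec)
    shifted[bins:] = spec[:len(spec) - bins]   # shift the whole spectrum up
    right = np.fft.irfft(shifted, n=n)
    return left, right
```

    Played over headphones, the small interaural frequency shift produces a slowly drifting interaural phase across the whole spectrum, analogous to the classic two-tone binaural beat.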

  6. Apparent motion perception in lower limb amputees with phantom sensations: "obstacle shunning" and "obstacle tolerance".

    PubMed

    Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J

    2018-03-21

    Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) for the two images. Participants included 12 unilateral lower limb amputees matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases for short perceived trajectories (moving through the object) for short SOAs, and long trajectories (circumventing the object) for long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to predominantly move through the object, whereas amputees with obstacle tolerance perceived leg stimuli to predominantly move around the object. That is, in people who experience obstacle shunning, apparent motion perception of lower limbs was not constrained by the laws of impenetrability (as the phantom disappears when invaded by objects), and legs can therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints for lower limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. This may have important implications for limb experience post-amputation (e.g., improving prosthesis embodiment when limb representation is constrained by the same limits as an intact limb). Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    PubMed

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
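
    Imposing the two cues on a click train can be sketched generically as a sample delay plus a gain. The actual cue-trading stimuli in the study were derived psychophysically for each listener, so the function below is only an editor's illustration with assumed names and conventions.

```python
import numpy as np

def apply_itd_ild(mono, fs, itd_us, ild_db):
    """Render a mono signal binaurally with an interaural time difference
    (microseconds; positive = right ear leads) and an interaural level
    difference (dB; positive = right ear louder). A generic sketch, not
    the authors' individually traded stimuli."""
    shift = int(round(itd_us * 1e-6 * fs))     # ITD as a whole-sample delay
    pad = np.pad(mono, (abs(shift), abs(shift)))
    left = np.roll(pad, max(shift, 0))         # right leads => delay the left ear
    right = np.roll(pad, max(-shift, 0)) * 10 ** (ild_db / 20)
    return left, right
```

    Giving the two cues the same sign yields a lateralized (or moving, if ramped over time) percept; opposing signs of appropriate magnitude trade against each other toward a centered percept, as in the incongruent condition above.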

  8. Reversed stereo depth and motion direction with anti-correlated stimuli.

    PubMed

    Read, J C; Eagle, R A

    2000-01-01

    We used anti-correlated stimuli to compare the correspondence problem in stereo and motion. Subjects performed a two-interval forced-choice disparity/motion direction discrimination task for different displacements. For anti-correlated 1d band-pass noise, we found weak reversed depth and motion. With 2d anti-correlated stimuli, stereo performance was impaired, but the perception of reversed motion was enhanced. We can explain the main features of our data in terms of channels tuned to different spatial frequencies and orientation. We suggest that a key difference between the solution of the correspondence problem by the motion and stereo systems concerns the integration of information at different orientations.
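
    An anti-correlated stimulus of the kind used here pairs each element in one eye with an element of opposite contrast polarity in the other. A minimal sketch for the binary-dot case follows; the shapes and values are illustrative, and the paper used band-pass noise rather than raw binary dots.

```python
import numpy as np

def anticorrelated_pair(shape=(64, 64), disparity=2, seed=1):
    """Anti-correlated random-dot pair: the right-eye image is the
    left-eye image displaced by `disparity` pixels with its contrast
    polarity inverted (a generic sketch, not the paper's band-pass
    filtered stimuli)."""
    rng = np.random.default_rng(seed)
    left = rng.choice([-1.0, 1.0], size=shape)    # +-1 contrast dots
    right = -np.roll(left, disparity, axis=1)     # displace, then invert polarity
    return left, right
```

    At the matched offset the interocular correlation is exactly -1, which is what can drive the weak reversed-depth and reversed-motion percepts described above.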

  9. ZAG-Otolith: Modification of Otolith-Ocular Reflexes, Motion Perception and Manual Control during Variable Radius Centrifugation Following Space Flight

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.

    2009-01-01

    Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data is currently being collected on astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, <20 cm radius), the effects of stimulus frequency (0.15 - 0.6 Hz) are examined on eye movements and motion perception. A closed-loop nulling task is also performed with and without vibrotactile display feedback of chair radial position. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for perceived tilt and translation amplitudes to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. One result of this study will be to characterize the variability (gain, asymmetry) in both otolith-ocular responses and motion perception during variable radius centrifugation, and measure the time course of post-flight recovery. This study will also address how adaptive changes in otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved with vibrotactile feedback of orientation.
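
    The stimulus level in the unilateral paradigm can be checked with the centripetal-acceleration formula a = omega^2 * r. The snippet below is a back-of-envelope illustration by the editor, not part of the study.

```python
import math

def centripetal_g(omega_deg_s, radius_m):
    """Centripetal acceleration, in units of g, for a given rotation
    rate (deg/s) and radius (m): a = omega^2 * r."""
    omega = math.radians(omega_deg_s)
    return omega ** 2 * radius_m / 9.81

# Unilateral centrifugation at 400 deg/s with the otolith 3.5 cm off-axis
# loads that otolith with roughly 0.17 g of lateral acceleration, while
# the otolith centered on the rotation axis receives essentially none.
```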

  10. Modification of Otolith-Ocular Reflexes, Motion Perception and Manual Control During Variable Radius Centrifugation Following Space Flight

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.

    2009-01-01

    Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. Data is currently being collected on astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, less than 20 cm radius), the effects of stimulus frequency (0.15 - 0.6 Hz) are examined on eye movements and motion perception. A closed-loop nulling task is also performed with and without vibrotactile display feedback of chair radial position. Data collection is currently ongoing. Results to date suggest there is a trend for perceived tilt and translation amplitudes to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. One result of this study will be to characterize the variability (gain, asymmetry) in both otolith-ocular responses and motion perception during variable radius centrifugation, and measure the time course of post-flight recovery. This study will also address how adaptive changes in otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved with vibrotactile feedback of orientation.

  11. Motion perception during variable-radius swing motion in darkness.

    PubMed

    Rader, A A; Oman, C M; Merfeld, D M

    2009-10-01

    Using a variable-radius roll swing motion paradigm, we examined the influence of interaural (y-axis) and dorsoventral (z-axis) force modulation on perceived tilt and translation by measuring perception of horizontal translation, roll tilt, and distance from center of rotation (radius) at 0.45 and 0.8 Hz using standard magnitude estimation techniques (primarily verbal reports) in darkness. Results show that motion perception was significantly influenced by both y- and z-axis forces. During constant radius trials, subjects' perceptions of tilt and translation were generally almost veridical. By selectively pairing radius (1.22 and 0.38 m) and frequency (0.45 and 0.8 Hz, respectively), the y-axis acceleration could be tailored in opposition to gravity so that the combined y-axis gravitoinertial force (GIF) variation at the subject's ears was reduced to approximately 0.035 m/s² - in effect, the y-axis GIF was "nulled" below putative perceptual threshold levels. With y-axis force nulling, subjects overestimated their tilt angle and underestimated their horizontal translation and radius. For some y-axis nulling trials, a radial linear acceleration at twice the tilt frequency (0.25 m/s² at 0.9 Hz, 0.13 m/s² at 1.6 Hz) was simultaneously applied to reduce the z-axis force variations caused by centripetal acceleration and by changes in the z-axis component of gravity during tilt. For other trials, the phase of this radial linear acceleration was altered to double the magnitude of the z-axis force variations. z-axis force nulling further increased the perceived tilt angle and further decreased perceived horizontal translation and radius relative to the y-axis nulling trials, while z-axis force doubling had the opposite effect. Subject reports were remarkably geometrically consistent; an observer model-based analysis suggests that perception was influenced by knowledge of swing geometry.
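
    The radius/frequency pairings quoted in this abstract are consistent with a small-angle nulling condition (2*pi*f)^2 * r close to g, under which the inertial and gravitational contributions to the interaural force cancel. The check below is an editor's illustration of that approximation, not the authors' observer model.

```python
import math

G = 9.81  # m/s^2

def nulling_ratio(freq_hz, radius_m):
    """Ratio (2*pi*f)^2 * r / g; a value near 1 means the peak inertial
    acceleration matches gravity, i.e. the small-angle y-axis
    gravitoinertial force is approximately nulled."""
    return (2.0 * math.pi * freq_hz) ** 2 * radius_m / G

# Both published pairings sit close to the nulling condition (ratio ~ 1):
#   0.45 Hz with r = 1.22 m, and 0.80 Hz with r = 0.38 m.
```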

  12. Embodied learning of a generative neural model for biological motion perception and inference

    PubMed Central

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215

  13. Embodied learning of a generative neural model for biological motion perception and inference.

    PubMed

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  14. Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation.

    PubMed

    Norman, J Farley; Phillips, Flip; Cheeseman, Jacob R; Thomason, Kelsey E; Ronning, Cecilia; Behari, Kriti; Kleinman, Kayla; Calloway, Autum B; Lamirande, Davora

    2016-01-01

    It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped "glaven") for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object's shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions-e.g., the participants' performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

  15. Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation

    PubMed Central

    Norman, J. Farley; Phillips, Flip; Cheeseman, Jacob R.; Thomason, Kelsey E.; Ronning, Cecilia; Behari, Kriti; Kleinman, Kayla; Calloway, Autum B.; Lamirande, Davora

    2016-01-01

    It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions–e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision. PMID:26863531

  16. Learning what to expect (in visual perception)

    PubMed Central

    Seriès, Peggy; Seitz, Aaron R.

    2013-01-01

    Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over time? Are there limits in the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning and review the possible neural basis of priors. PMID:24187536
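
    The "expectations as priors" idea can be made concrete with the standard Gaussian case: a likelihood centered on the sensory measurement, combined with a zero-mean prior favoring slow speeds, yields a MAP estimate shrunk toward zero. This is a generic illustration of that class of model, not the specific model of any study reviewed here.

```python
def map_speed(measured, sigma_like, sigma_prior):
    """MAP speed estimate under a Gaussian likelihood N(measured, sigma_like^2)
    and a zero-mean 'slow speed' Gaussian prior N(0, sigma_prior^2)."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_like ** 2)
    return w * measured   # noisier input (larger sigma_like) => slower percept
```

    For example, at low contrast the likelihood widens, so the same physical speed yields a smaller estimate, in line with the classic finding that low-contrast gratings appear to move more slowly.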

  17. Effects of Motion and Figural Goodness on Haptic Object Perception in Infancy.

    ERIC Educational Resources Information Center

    Streri, Arlette; Spelke, Elizabeth S.

    1989-01-01

    After haptic habituation to a ring display, infants perceived the rings in two experiments as parts of one connected object. In both haptic and visual modes, infants appeared to perceive object unity by analyzing motion but not by analyzing figural goodness. (RH)

  18. Vection Modulates Emotional Valence of Autobiographical Episodic Memories

    ERIC Educational Resources Information Center

    Seno, Takeharu; Kawabe, Takahiro; Ito, Hiroyuki; Sunaga, Shoji

    2013-01-01

    We examined whether illusory self-motion perception ("vection") induced by viewing upward and downward grating motion stimuli can alter the emotional valence of recollected autobiographical episodic memories. We found that participants recollected positive episodes more often while perceiving upward vection. However, when we tested a small moving…

  19. Implied dynamics biases the visual perception of velocity.

    PubMed

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand the anecdotal report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiments 2 and 3) or rectilinear (Experiments 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiments 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform.

  20. The effect of age upon the perception of 3-D shape from motion.

    PubMed

    Norman, J Farley; Cheeseman, Jacob R; Pyles, Jessica; Baxter, Michael W; Thomason, Kelsey E; Calloway, Autum B

    2013-12-18

    Two experiments evaluated the ability of 50 older, middle-aged, and younger adults to discriminate the 3-dimensional (3-D) shape of curved surfaces defined by optical motion. In Experiment 1, temporal correspondence was disrupted by limiting the lifetimes of the moving surface points. In order to discriminate 3-D surface shape reliably, the younger and middle-aged adults needed a surface point lifetime of approximately 4 views (in the apparent motion sequences). In contrast, the older adults needed a much longer surface point lifetime of approximately 9 views in order to reliably perform the same task. In Experiment 2, the negative effect of age upon 3-D shape discrimination from motion was replicated. In this experiment, however, the participants' abilities to discriminate grating orientation and speed were also assessed. Edden et al. (2009) have recently demonstrated that behavioral grating orientation discrimination correlates with GABA (gamma aminobutyric acid) concentration in human visual cortex. Our results demonstrate that the negative effect of age upon 3-D shape perception from motion is not caused by impairments in the ability to perceive motion per se, but does correlate significantly with grating orientation discrimination. This result suggests that the age-related decline in 3-D shape discrimination from motion is related to decline in GABA concentration in visual cortex. Copyright © 2013 Elsevier B.V. All rights reserved.
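
    The limited-lifetime manipulation in Experiment 1 can be sketched as a per-frame update in which dots that have survived a fixed number of views are redrawn at random positions, destroying their temporal correspondence. The function name and normalized coordinates below are illustrative assumptions.

```python
import numpy as np

def step_limited_lifetime_dots(xy, age, lifetime, rng):
    """Advance a limited-lifetime dot field by one view: dots whose age
    reaches `lifetime` are relocated to random positions and their age
    reset (a generic stimulus sketch in normalized [0, 1) coordinates)."""
    age = age + 1
    expired = age >= lifetime
    xy = xy.copy()
    xy[expired] = rng.uniform(0.0, 1.0, size=(int(expired.sum()), 2))
    age = np.where(expired, 0, age)
    return xy, age
```

    In practice initial ages are staggered so that only a fraction of the dots is redrawn on any given frame; shortening `lifetime` below an observer's threshold (about 4 views for the younger groups, about 9 for the older group here) makes the 3-D shape unrecoverable.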

  1. Voluntary attention modulates motion-induced mislocalization

    PubMed Central

    Tse, Peter U.; Whitney, David; Anstis, Stuart; Cavanagh, Patrick

    2013-01-01

    When a test is flashed on top of two superimposed, opposing motions, the perceived location of the test is shifted in opposite directions depending on which of the two motions is attended. Because the stimulus remains unchanged as attention switches from one motion to the other, the effect cannot be due to stimulus-driven, low-level motion. A control condition ruled out any contribution from possible attention-induced cyclotorsion of the eyes. This provides the strongest evidence to date for a role of attention in the perception of location, and establishes that what we attend to influences where we perceive objects to be. PMID:21415228

  2. Relationships of a Circular Singer Arm Gesture to Acoustical and Perceptual Measures of Singing: A Motion Capture Study

    ERIC Educational Resources Information Center

    Brunkan, Melissa C.

    2016-01-01

    The purpose of this study was to validate previous research that suggests using movement in conjunction with singing tasks can affect intonation and perception of the task. Singers (N = 49) were video and audio recorded, using a motion capture system, while singing a phrase from a familiar song, first with no motion, and then while doing a low,…

  3. What you feel is what you see: inverse dynamics estimation underlies the resistive sensation of a delayed cursor

    PubMed Central

    Takamuku, Shinya; Gomi, Hiroaki

    2015-01-01

    How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the ‘inverse’ computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. PMID:26156766
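    The inverse computation described above (estimating force from observed motion) can be sketched numerically: differentiate a position trace twice and scale by a mass. This is a minimal illustration only; the mass, time step, and trajectory below are illustrative assumptions, not values from the study.

    ```python
    # Minimal sketch of inverse dynamics: recovering an implied force from
    # an observed motion trace alone (F = m * a). The mass, time step, and
    # trajectory are illustrative assumptions, not values from the study.

    def implied_force(positions, dt, mass=1.0):
        """Central finite-difference acceleration, scaled by an assumed mass."""
        forces = []
        for i in range(1, len(positions) - 1):
            accel = (positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt ** 2
            forces.append(mass * accel)
        return forces

    # A cursor accelerating uniformly at 2 units/s^2 follows x = t**2,
    # so the implied force on a unit mass is ~2.0 throughout:
    dt = 0.1
    trace = [(i * dt) ** 2 for i in range(6)]
    print(implied_force(trace, dt))  # each entry ≈ 2.0
    ```

    On this view, the visually implied reaction force of the delayed cursor would scale with its forward acceleration, which is consistent with the correlation the authors report.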

  4. Conveying Movement in Music and Prosody

    PubMed Central

    Hedger, Stephen C.; Nusbaum, Howard C.; Hoeckner, Berthold

    2013-01-01

    We investigated whether acoustic variation of musical properties can analogically convey descriptive information about an object. Specifically, we tested whether information from the temporal structure in music interacts with perception of a visual image to form an analog perceptual representation as a natural part of music perception. In Experiment 1, listeners heard music with an accelerating or decelerating temporal pattern, and then saw a picture of a still or moving object and decided whether it was animate or inanimate – a task unrelated to the patterning of the music. Object classification was faster when musical motion matched visually depicted motion. In Experiment 2, participants heard spoken sentences that were accompanied by accelerating or decelerating music, and then were presented with a picture of a still or moving object. When motion information in the music matched motion information in the picture, participants were similarly faster to respond. Fast and slow temporal patterns without acceleration and deceleration, however, did not make participants faster when they saw a picture depicting congruent motion information (Experiment 3), suggesting that understanding temporal structure information in music may depend on specific metaphors about motion in music. Taken together, these results suggest that visuo-spatial referential information can be analogically conveyed and represented by music and can be integrated with speech or influence the understanding of speech. PMID:24146920

  5. Translation and articulation in biological motion perception.

    PubMed

    Masselink, Jana; Lappe, Markus

    2015-08-01

    Recent models of biological motion processing focus on the articulational aspect of human walking, investigated by point-light figures walking in place. However, in real human walking, the change in the position of the limbs relative to each other (referred to as articulation) results in a change of body location in space over time (referred to as translation). In order to examine the role of this translational component in the perception of biological motion, we designed three psychophysical experiments of facing (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of forward/backward and leftward/rightward articulation tasks, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in translation direction in all three discrimination tasks. In the case of facing discrimination the bias was limited to short stimulus presentation. Our results suggest an interaction of articulation analysis with the processing of translational motion, leading to best articulation discrimination when translational direction and speed match articulation. Moreover, we conclude that the global motion of the center-of-mass of the dot pattern is more relevant to the processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated in models of human motion detection.

  6. The responsiveness of biological motion processing areas to selective attention towards goals.

    PubMed

    Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert

    2012-10-15

    A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas, particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Visual-vestibular integration motion perception reporting

    NASA Technical Reports Server (NTRS)

    Harm, Deborah L.; Reschke, Millard R.; Parker, Donald E.

    1999-01-01

    Self-orientation and self/surround-motion perception derive from a multimodal sensory process that integrates information from the eyes, vestibular apparatus, proprioceptive and somatosensory receptors. Results from short and long duration spaceflight investigations indicate that: (1) perceptual and sensorimotor function was disrupted during the initial exposure to microgravity and gradually improved over hours to days (individuals adapt), (2) the presence and/or absence of information from different sensory modalities differentially affected the perception of orientation, self-motion and surround-motion, (3) perceptual and sensorimotor function was initially disrupted upon return to Earth-normal gravity and gradually recovered to preflight levels (individuals readapt), and (4) the longer the exposure to microgravity, the more complete the adaptation, the more profound the postflight disturbances, and the longer the recovery period to preflight levels. While much has been learned about perceptual and sensorimotor reactions and adaptation to microgravity, there is much remaining to be learned about the mechanisms underlying the adaptive changes, and about how intersensory interactions affect perceptual and sensorimotor function during voluntary movements. During space flight, space motion sickness (SMS) and perceptual disturbances have led to reductions in performance efficiency and sense of well-being. During entry and immediately after landing, such disturbances could have a serious impact on the ability of the commander to land the Orbiter and on the ability of all crew members to egress from the Orbiter, particularly in a non-nominal condition or following extended stays in microgravity. An understanding of spatial orientation and motion perception is essential for developing countermeasures for SMS and perceptual disturbances during spaceflight and upon return to Earth.
Countermeasures for optimal performance in flight and a successful return to Earth require the development of preflight and in-flight training to help astronauts acquire and maintain a dual adaptive state. Despite the considerable experience with, and use of, an extensive set of countermeasures in the Russian space program, SMS and perceptual disturbances remain an unresolved problem on long-term flights. Reliable, valid perceptual reports are required to develop and refine stimulus rearrangements presented in the PAT devices currently being developed as countermeasures for the prevention of motion sickness and perceptual disturbances during spaceflight, and to ensure a less hazardous return to Earth. Prior to STS-8, crew member descriptions of their perceptual experiences were, at best, anecdotal. Crew members were not schooled in the physiology or psychology of sensory perception, nor were they exposed to the appropriate professional vocabulary. However, beginning with the STS-8 Shuttle flight, a serious effort was initiated to teach astronauts a systematic method to classify and quantify their perceptual responses in space, during entry, and after flight. Understanding, categorizing, and characterizing perceptual responses to spaceflight has been greatly enhanced by implementation of that training system.

  8. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. [Comparative analysis of light sensitivity, depth and motion perception in animals and humans].

    PubMed

    Schaeffel, F

    2017-11-01

    This study examined how humans perform regarding light sensitivity, depth perception and motion vision in comparison to various animals. The parameters that limit the performance of the visual system for these different functions were examined. This study was based on literature studies (search in PubMed) and the author's own results. Light sensitivity is limited by the brightness of the retinal image, which in turn is determined by the f‑number of the eye. Furthermore, it is limited by photon noise, thermal decay of rhodopsin, noise in the phototransduction cascade and neuronal processing. In invertebrates, impressive optical tricks have been developed to increase the number of photons reaching the photoreceptors. Furthermore, the spontaneous decay of the photopigment is lower in invertebrates, at the cost of higher energy consumption. For depth perception at close range, stereopsis is the most precise mechanism but is available only to a few vertebrates. In contrast, motion parallax is used by many species, vertebrates as well as invertebrates. In a few cases accommodation or chromatic aberration is used for depth measurements. In motion vision the temporal resolution of the eye is most important. The flicker fusion frequency correlates in vertebrates with metabolic turnover and body temperature but also reaches very high values in insects. Apart from that, the flicker fusion frequency generally declines with increasing body weight. Compared to animals, the performance of the human visual system is among the best regarding light sensitivity, the best regarding depth resolution and in the middle range regarding motion resolution.

  10. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two visual Ternus frames (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  11. A computational model for reference-frame synthesis with applications to motion perception.

    PubMed

    Clarke, Aaron M; Öğmen, Haluk; Herzog, Michael H

    2016-09-01

    As discovered by the Gestaltists, in particular by Duncker, we often perceive motion to be within a non-retinotopic reference frame. For example, the motion of a reflector on a bicycle appears to be circular, whereas, it traces out a cycloidal path with respect to external world coordinates. The reflector motion appears to be circular because the human brain subtracts the horizontal motion of the bicycle from the reflector motion. The bicycle serves as a reference frame for the reflector motion. Here, we present a general mathematical framework, based on vector fields, to explain non-retinotopic motion processing. Using four types of non-retinotopic motion paradigms, we show how the theory works in detail. For example, we show how non-retinotopic motion in the Ternus-Pikler display can be computed. Copyright © 2015 Elsevier Ltd. All rights reserved.
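    The bicycle-reflector example can be checked numerically: subtracting the reference frame's translation from the cycloidal world-coordinate path leaves pure circular motion. The wheel radius and sample angles below are arbitrary illustrative values, and the sketch covers only the simplest case of a uniformly translating reference frame, not the paper's full vector-field framework.

    ```python
    import math

    # A point on the rim of a rolling wheel traces a cycloid in world
    # coordinates. Subtracting the hub's translation (the reference frame,
    # as in Duncker's observation) leaves a circle of the wheel's radius.
    # Radius and sample angles are arbitrary illustrative values.

    r = 1.0  # wheel radius
    for theta in [k * math.pi / 4 for k in range(8)]:
        # Cycloid: world-coordinate position of the rim point
        world_x = r * (theta - math.sin(theta))
        world_y = r * (1 - math.cos(theta))
        # Reference-frame subtraction: remove the hub position (r*theta, r)
        rel_x = world_x - r * theta
        rel_y = world_y - r
        # The residual motion is circular: constant distance r from the hub
        assert abs(math.hypot(rel_x, rel_y) - r) < 1e-9
    ```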

  12. The power law for the perception of rotation by airline pilots.

    NASA Technical Reports Server (NTRS)

    Clark, B.; Stewart, J. D.

    1972-01-01

    The purpose of this study was to determine the power laws for the perception of rotation about the three major body axes. Eighteen airline pilots made magnitude estimates of 5-sec pulses of nine angular accelerations having a range of acceleration × time of 10-150 deg/sec. The results showed that (1) the power law with an exponent of 1.4 describes the subjective motion of these pilots for all three major body axes, (2) the power law also describes the perception of motion for individual pilots with a substantial range of exponents, (3) there were significant correlations among the exponents for the three body axes, and (4) the data suggest that the power law over the wide range used may be more complex than implied by a formula with a single exponent.
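    The reported relation is Stevens' power law, psi = k * phi**n with n = 1.4. A minimal sketch follows; the scaling constant k is an illustrative assumption, since the study reports only the exponent.

    ```python
    # Stevens' power law for perceived rotation: psi = k * phi**n, with the
    # exponent n = 1.4 reported for the pilots. The scaling constant k is
    # an illustrative assumption.

    def perceived_magnitude(stimulus, k=1.0, n=1.4):
        return k * stimulus ** n

    # With an expansive exponent (n > 1), doubling the stimulus more than
    # doubles the perceived magnitude:
    ratio = perceived_magnitude(20.0) / perceived_magnitude(10.0)
    print(round(ratio, 3))  # 2**1.4 ≈ 2.639
    ```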

  13. From Flashes to Edges to Objects: Recovery of Local Edge Fragments Initiates Spatiotemporal Boundary Formation

    PubMed Central

    Erlikhman, Gennady; Kellman, Philip J.

    2016-01-01

    Spatiotemporal boundary formation (SBF) is the perception of illusory boundaries, global form, and global motion from spatially and temporally sparse transformations of texture elements (Shipley and Kellman, 1993a, 1994; Erlikhman and Kellman, 2015). It has been theorized that the visual system uses positions and times of element transformations to extract local oriented edge fragments, which then connect by known interpolation processes to produce larger contours and shapes in SBF. To test this theory, we created a novel display consisting of a sawtooth arrangement of elements that disappeared and reappeared sequentially. Although apparent motion along the sawtooth would be expected, with appropriate spacing and timing, the resulting percept was of a larger, moving, illusory bar. This display approximates the minimal conditions for visual perception of an oriented edge fragment from spatiotemporal information and confirms that such events may be initiating conditions in SBF. Using converging objective and subjective methods, experiments showed that edge formation in these displays was subject to a temporal integration constraint of ~80 ms between element disappearances. The experiments provide clear support for models of SBF that begin with extraction of local edge fragments, and they identify minimal conditions required for this process. We conjecture that these results reveal a link between spatiotemporal object perception and basic visual filtering. Motion energy filters have usually been studied with orientation given spatially by luminance contrast. When orientation is not given in static frames, these same motion energy filters serve as spatiotemporal edge filters, yielding local orientation from discrete element transformations over time. 
As numerous filters of different characteristic orientations and scales may respond to any simple SBF stimulus, we discuss the aperture and ambiguity problems that accompany this conjecture and how they might be resolved by the visual system. PMID:27445886

  14. Velocity storage contribution to vestibular self-motion perception in healthy human subjects.

    PubMed

    Bertolini, G; Ramat, S; Laurens, J; Bockisch, C J; Marti, S; Straumann, D; Palla, A

    2011-01-01

    Self-motion perception after a sudden stop from a sustained rotation in darkness lasts approximately as long as reflexive eye movements. We hypothesized that, after an angular velocity step, self-motion perception and reflexive eye movements are driven by the same vestibular pathways. In 16 healthy subjects (25-71 years of age), perceived rotational velocity (PRV) and the vestibulo-ocular reflex (rVOR) after sudden decelerations (90°/s²) from constant-velocity (90°/s) earth-vertical axis rotations were simultaneously measured (PRV reported by hand-lever turning; rVOR recorded by search coils). Subjects were upright (yaw) or 90° left-ear-down (pitch). After both yaw and pitch decelerations, PRV rose rapidly and showed a plateau before decaying. In contrast, slow-phase eye velocity (SPV) decayed immediately after the initial increase. SPV and PRV were fitted with the sum of two exponentials: one time constant accounting for the semicircular canal (SCC) dynamics and one time constant accounting for a central process known as the velocity storage mechanism (VSM). Parameters were constrained by requiring equal SCC and VSM time constants for SPV and PRV; the gains weighting the two exponential functions were free to change. Both SPV (variance-accounted-for: 0.85 ± 0.10) and PRV (variance-accounted-for: 0.86 ± 0.07) were accurately fitted, showing that the differences between the SPV and PRV curves can be explained by a greater relative weight of the VSM in PRV compared with SPV (twofold for yaw, threefold for pitch). These results support our hypothesis that self-motion perception after angular velocity steps is driven by the same central vestibular processes as reflexive eye movements and that no additional mechanisms are required to explain the perceptual dynamics.
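    The two-exponential model described above can be sketched as follows. The time constants and gains are illustrative, not the fitted values; with shared time constants, a heavier velocity-storage gain reproduces the slower decay of perceived velocity relative to eye velocity.

    ```python
    import math

    # Sketch of the two-exponential fit applied to slow-phase eye velocity
    # (SPV) and perceived rotational velocity (PRV): one time constant for
    # semicircular-canal (SCC) dynamics and one for the velocity storage
    # mechanism (VSM), shared between SPV and PRV, with free gains.
    # All numeric values here are illustrative, not the fitted parameters.

    def response(t, g_scc, g_vsm, tau_scc=5.0, tau_vsm=15.0):
        return g_scc * math.exp(-t / tau_scc) + g_vsm * math.exp(-t / tau_vsm)

    # Giving PRV a greater relative VSM weight (as the fits indicated)
    # makes perception outlast the reflexive eye movement:
    times = range(0, 40, 5)
    spv = [response(t, g_scc=60.0, g_vsm=30.0) for t in times]
    prv = [response(t, g_scc=30.0, g_vsm=60.0) for t in times]
    assert spv[0] == prv[0]                              # same initial velocity
    assert all(p > s for p, s in zip(prv[1:], spv[1:]))  # slower perceptual decay
    ```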

  15. Integration of Canal and Otolith Inputs by Central Vestibular Neurons Is Subadditive for Both Active and Passive Self-Motion: Implication for Perception

    PubMed Central

    Carriot, Jerome; Jamali, Mohsen; Brooks, Jessica X.

    2015-01-01

    Traditionally, the neural encoding of vestibular information is studied by applying either passive rotations or translations in isolation. However, natural vestibular stimuli are typically more complex. During everyday life, our self-motion is generally not restricted to one dimension, but rather comprises both rotational and translational motion that will simultaneously stimulate receptors in the semicircular canals and otoliths. In addition, natural self-motion is the result of self-generated and externally generated movements. However, to date, it remains unknown how information about rotational and translational components of self-motion is integrated by vestibular pathways during active and/or passive motion. Accordingly, here, we compared the responses of neurons at the first central stage of vestibular processing to rotation, translation, and combined motion. Recordings were made in alert macaques from neurons in the vestibular nuclei involved in postural control and self-motion perception. In response to passive stimulation, neurons did not combine canal and otolith afferent information linearly. Instead, inputs were subadditively integrated with a weighting that was frequency dependent. Although canal inputs were more heavily weighted at low frequencies, the weighting of otolith input increased with frequency. In response to active stimulation, neuronal modulation was significantly attenuated (∼70%) relative to passive stimulation for rotations and translations and even more profoundly attenuated for combined motion due to subadditive input integration. Together, these findings provide insights into neural computations underlying the integration of semicircular canal and otolith inputs required for accurate posture and motor control, as well as perceptual stability, during everyday life. PMID:25716854

  16. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  17. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
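    The optimal cue integration model referenced above has a simple closed form: each cue is weighted by its relative reliability (inverse variance), and the combined variance falls below that of either cue alone. A minimal sketch with illustrative heading estimates and noise levels (not data from the reviewed studies):

    ```python
    # Sketch of optimal (reliability-weighted) cue integration. Reliability
    # is the inverse variance; the combined estimate weights each cue by
    # its relative reliability, and the combined uncertainty is lower than
    # either cue alone. Heading values and sigmas are illustrative.

    def integrate(est_vis, sigma_vis, est_vest, sigma_vest):
        r_vis, r_vest = 1 / sigma_vis ** 2, 1 / sigma_vest ** 2
        w_vis = r_vis / (r_vis + r_vest)
        combined = w_vis * est_vis + (1 - w_vis) * est_vest
        sigma_comb = (1 / (r_vis + r_vest)) ** 0.5
        return combined, sigma_comb

    heading, sigma = integrate(est_vis=10.0, sigma_vis=2.0,
                               est_vest=14.0, sigma_vest=4.0)
    # The estimate is pulled toward the more reliable (visual) cue, and
    # the combined uncertainty beats the better single cue:
    assert abs(heading - 10.8) < 1e-9
    assert sigma < 2.0
    ```

    This is the standard maximum-likelihood prediction that the heading experiments reviewed here test against behavioral and neuronal data.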

  18. How Do Changes in Speed Affect the Perception of Duration?

    ERIC Educational Resources Information Center

    Matthews, William J.

    2011-01-01

    Six experiments investigated how changes in stimulus speed influence subjective duration. Participants saw rotating or translating shapes in three conditions: constant speed, accelerating motion, and decelerating motion. The distance moved and average speed were the same in all three conditions. In temporal judgment tasks, the constant-speed…

  19. Motion and Edge Sensitivity in Perception of Object Unity

    ERIC Educational Resources Information Center

    Smith, W. Carter; Johnson, Scott P.; Spelke, Elizabeth S.

    2003-01-01

    Although much evidence indicates that young infants perceive unitary objects by analyzing patterns of motion, infants' abilities to perceive object unity by analyzing Gestalt properties and by integrating distinct views of an object over time are in dispute. To address these controversies, four experiments investigated adults' and infants'…

  20. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  1. Perception of Invariance Over Perspective Transformations in Five Month Old Infants.

    ERIC Educational Resources Information Center

    Gibson, Eleanor; And Others

    This experiment asked whether infants at 5 months perceived an invariant over four types of rigid motion (perspective transformations), and thereby differentiated rigid motion from deformation. Four perspective transformations of a sponge rubber object (rotation around the vertical axis, rotation around the horizontal axis, rotation in the frontal…

  2. Sociability modifies dogs' sensitivity to biological motion of different social relevance.

    PubMed

    Ishikawa, Yuko; Mills, Daniel; Willmott, Alexander; Mullineaux, David; Guo, Kun

    2018-03-01

    Preferential attention to living creatures is believed to be an intrinsic capacity of the visual system of several species; the perception of biological motion is widely studied and, in humans, correlates with social cognitive performance. Although domestic dogs are exceptionally attentive to human social cues, it is unknown whether their sociability is associated with sensitivity to conspecific and heterospecific biological motion cues of different social relevance. We recorded video clips of point-light displays depicting a human or dog walking in either frontal or lateral view. In a preferential looking paradigm, dogs spontaneously viewed 16 paired point-light displays showing combinations of normal/inverted (control condition), human/dog and frontal/lateral views. Overall, dogs looked significantly longer at the frontal human point-light display versus the inverted control, probably due to its clearer social/biological relevance. Dogs' sociability, assessed through owner-completed questionnaires, further revealed that low-sociability dogs preferred the lateral point-light display view, whereas high-sociability dogs preferred the frontal view. Clearly, dogs can recognize biological motion, but their preference is influenced by their sociability and the stimulus salience, implying biological motion perception may reflect aspects of dogs' social cognition.

  3. The Spatiotemporal Characteristics of Visual Motion Priming

    DTIC Science & Technology

    1994-07-01

    (Abstract not recoverable from the scanned report. Surviving fragments cite Barden, W. (1982, June), A general-purpose I/O board for the Color Computer, BYTE Magazine, pp. 260-281, and a 1965 paper co-authored by W. Levick, and describe a figure of the bistable diamond apparent-motion display (after Ramachandran & Anstis, 1983) with its "streaming" and "bouncing" percepts of apparent motion.)

  4. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been shown that humans integrate low-level cues optimally, weighting each cue in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
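    The optimal (reliability-proportional) integration strategy mentioned above is conventionally modeled as maximum-likelihood cue combination. A minimal sketch, with hypothetical cue values and variances (not figures from the study):

```python
def integrate_cues(estimates, variances):
    """Maximum-likelihood cue combination: weight each cue by its
    reliability (inverse variance); the combined variance never exceeds
    that of the most reliable single cue."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    estimate = sum(r * x for r, x in zip(reliabilities, estimates)) / total
    return estimate, 1.0 / total

# Hypothetical cues: form says 0.6 (var 0.04), motion says 0.8 (var 0.16);
# the combined estimate lands nearer the more reliable form cue.
est, var = integrate_cues([0.6, 0.8], [0.04, 0.16])   # est ~ 0.64, var ~ 0.032
```

    An "optimal observer" prediction of this kind is what the subjects' identity choices were compared against.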

  5. The Posture of Putting One's Palms Together Modulates Visual Motion Event Perception.

    PubMed

    Saito, Godai; Gyoba, Jiro

    2018-02-01

    We investigated the effect of an observer's hand postures on visual motion perception using the stream/bounce display. When two identical visual objects move across collinear horizontal trajectories toward each other in a two-dimensional display, observers perceive them as either streaming or bouncing. In our previous study, we found that when observers put their palms together just below the coincidence point of the two objects, the percentage of bouncing responses increased, depending mainly on the proprioceptive information from their own hands. However, it remained unclear whether the tactile or the haptic (force) information produced by the posture predominantly influences stream/bounce perception. We addressed this question by manipulating the tactile and haptic information available to the palms of the hands. Experiment 1 showed that bouncing perception was promoted only when observers directly put their palms together; there was no effect when a brick was sandwiched between the participant's palms. Experiment 2 demonstrated that the strength of the force used when putting the palms together had no effect on increasing bounce perception. Our findings indicate that the hands-induced bounce effect derives from the tactile information produced by the direct contact between both palms.

  6. The economics of motion perception and invariants of visual sensitivity.

    PubMed

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  7. Time perception of visual motion is tuned by the motor representation of human actions

    PubMed Central

    Gavazzi, Gioele; Bisio, Ambra; Pozzo, Thierry

    2013-01-01

    Several studies have shown that the observation of a rapidly moving stimulus dilates our perception of time. However, this effect appears to be at odds with the fact that our interactions both with the environment and with each other are temporally accurate. This work exploits this paradox to investigate whether the temporal accuracy of visual motion perception relies on motor representations of actions. To this aim, the stimulus was a dot, displayed at different velocities, whose kinematics either did or did not belong to the human motor repertoire. Participants had to replicate its duration in two tasks differing in the underlying motor plan. Results show that, independent of the task's motor plan, temporal accuracy and precision depend on the correspondence between the stimulus' kinematics and the observer's motor competencies. Our data suggest that the temporal mechanism for visual motion exploits a temporal visuomotor representation tuned by motor knowledge of human actions. PMID:23378903

  8. Active touch and self-motion encoding by Merkel cell-associated afferents

    PubMed Central

    Severson, Kyle S.; Xu, Duo; Van de Loo, Margaret; Bai, Ling; Ginty, David D.; O’Connor, Daniel H.

    2017-01-01

    Summary Touch perception depends on integrating signals from multiple types of peripheral mechanoreceptors. Merkel-cell associated afferents are thought to play a major role in form perception by encoding surface features of touched objects. However, activity of Merkel afferents during active touch has not been directly measured. Here, we show that Merkel and unidentified slowly adapting afferents in the whisker system of behaving mice respond to both self-motion and active touch. Touch responses were dominated by sensitivity to bending moment (torque) at the base of the whisker and its rate of change, and largely explained by a simple mechanical model. Self-motion responses encoded whisker position within a whisk cycle (phase), not absolute whisker angle, and arose from stresses reflecting whisker inertia and activity of specific muscles. Thus, Merkel afferents send to the brain multiplexed information about whisker position and surface features, suggesting that proprioception and touch converge at the earliest neural level. PMID:28434802

  9. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
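    The stimulus–response correlation technique described above can be illustrated with a toy reverse-correlation sketch: for unit-variance white-noise input, the cross-correlation between response and stimulus at lag k recovers the k-th tap of a linear filter. The filter shape below is an arbitrary assumption for illustration, not the measured pursuit filter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
stimulus = rng.standard_normal(n)            # white-noise motion signal
true_filter = np.exp(-np.arange(10) / 3.0)   # assumed temporal impulse response
true_filter /= true_filter.sum()

# Response: stimulus passed through the linear filter, plus measurement noise
response = np.convolve(stimulus, true_filter)[:n] + 0.1 * rng.standard_normal(n)

# For unit-variance white noise, E[response[t] * stimulus[t - k]] = filter[k],
# so averaging the lagged products recovers each filter tap.
recovered = np.array(
    [np.mean(response[k:] * stimulus[: n - k]) for k in range(len(true_filter))]
)
```

    The paper's filter is a 3D space–time function rather than this 1D temporal one, but the estimation principle is the same.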

  10. Graphical Representations and the Perception of Motion: Integrating Isomorphism through Kinesthesia into Physics Instruction

    ERIC Educational Resources Information Center

    Espinoza, Fernando

    2015-01-01

    The incorporation of engaging and meaningful learning experiences is essential for the enhancement of critical thinking and the development of scientific literacy. The study engaged several groups of students in activities designed to elicit their understanding of a graphical representation of motion, and to determine the kinesthetic effect of…

  11. The Perception of Prototypical Motion: Synchronization Is Enhanced with Quantitatively Morphed Gestures of Musical Conductors

    ERIC Educational Resources Information Center

    Wollner, Clemens; Deconinck, Frederik J. A.; Parkinson, Jim; Hove, Michael J.; Keller, Peter E.

    2012-01-01

    Aesthetic theories have long suggested perceptual advantages for prototypical exemplars of a given class of objects or events. Empirical evidence confirmed that morphed (quantitatively averaged) human faces, musical interpretations, and human voices are preferred over most individual ones. In this study, biological human motion was morphed and…

  12. No Evidence for Impaired Perception of Biological Motion in Adults with Autistic Spectrum Disorders

    ERIC Educational Resources Information Center

    Murphy, Patrick; Brady, Nuala; Fitzgerald, Michael; Troje, Nikolaus F.

    2009-01-01

    A central feature of autistic spectrum disorders (ASDs) is a difficulty in identifying and reading human expressions, including those present in the moving human form. One previous study, by Blake et al. (2003), reports decreased sensitivity for perceiving biological motion in children with autism, suggesting that perceptual anomalies underlie…

  13. Pilot Studies on Object Motion Perception During Linear Self-Motion After Long Duration Centrifugation of Human Subjects

    DTIC Science & Technology

    1993-02-01

    Measurements were taken with the sled moving either in the forward or backward direction, each threshold being measured once… Surviving reference fragments from the scanned report: Furrer, R. & Messerschmid, E. (1990). Space sickness on Earth. Experimental Brain Research, 79(3), 661-663; Taylor, M. M. & Creelman, C. D. (1967). PEST.

  14. A Study of Planetarium Effectiveness on Student Achievement, Perceptions and Retention.

    ERIC Educational Resources Information Center

    Ridky, Robert William

    Reported is a study to determine the effect of planetarium instruction in terms of immediate attainment, attitude, and retention in the teaching of selected celestial motion and non-celestial motion concepts, when contrasted to or combined with the inquiry activities utilized by the nationally developed science curricula. Observations were made on…

  15. A Nonlinear, Human-Centered Approach to Motion Cueing with a Neurocomputing Solver

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Cardullo, Frank M.; Houck, Jacob A.

    2002-01-01

    This paper discusses the continuation of research into the development of new motion cueing algorithms first reported in 1999. In this earlier work, two viable approaches to motion cueing were identified: the coordinated adaptive washout algorithm or 'adaptive algorithm', and the 'optimal algorithm'. In this study, a novel approach to motion cueing is discussed that would combine features of both algorithms. The new algorithm is formulated as a linear optimal control problem, incorporating improved vestibular models and an integrated visual-vestibular motion perception model previously reported. A control law is generated from the motion platform states, resulting in a set of nonlinear cueing filters. The time-varying control law requires the matrix Riccati equation to be solved in real time. Therefore, in order to meet the real time requirement, a neurocomputing approach is used to solve this computationally challenging problem. Single degree-of-freedom responses for the nonlinear algorithm were generated and compared to the adaptive and optimal algorithms. Results for the heave mode show the nonlinear algorithm producing a motion cue with a time-varying washout, sustaining small cues for a longer duration and washing out larger cues more quickly. The addition of the optokinetic influence from the integrated perception model was shown to improve the response to a surge input, producing a specific force response with no steady-state washout. Improved cues are also observed for responses to a sway input. Yaw mode responses reveal that the nonlinear algorithm improves the motion cues by reducing the magnitude of negative cues. The effectiveness of the nonlinear algorithm as compared to the adaptive and linear optimal algorithms will be evaluated on a motion platform, the NASA Langley Research Center Visual Motion Simulator (VMS), and ultimately the Cockpit Motion Facility (CMF) with a series of pilot controlled maneuvers. 
A proposed experimental procedure is discussed. The results of this evaluation will be used to assess motion cueing performance.
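    The computational core of the algorithm above is solving the matrix Riccati equation for the optimal feedback gain. A sketch for a toy one-axis (double-integrator) model, with illustrative placeholder matrices rather than the paper's vestibular/visual model, and plain fixed-point integration standing in for the neurocomputing solver:

```python
import numpy as np

# Toy one-axis model (position, velocity) -- illustrative placeholders
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                 # state (washout) penalty
R = np.array([[1.0]])         # control penalty
Rinv = np.linalg.inv(R)

# Integrate the differential Riccati equation to its fixed point; the
# fixed point is the algebraic-Riccati solution defining the LQR law.
P = np.zeros((2, 2))
dt = 0.005
for _ in range(40000):
    dP = A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q
    P = P + dt * dP

K = Rinv @ B.T @ P            # feedback gain in the control law u = -K x
# For this double integrator, K converges to [1, sqrt(3)]
```

    In the paper the equation is time-varying and must be re-solved in real time, which is the computational burden the neurocomputing solver addresses.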

  16. Global processing in amblyopia: a review

    PubMed Central

    Hamm, Lisa M.; Black, Joanna; Dai, Shuan; Thompson, Benjamin

    2014-01-01

    Amblyopia is a neurodevelopmental disorder of the visual system that is associated with disrupted binocular vision during early childhood. There is evidence that the effects of amblyopia extend beyond the primary visual cortex to regions of the dorsal and ventral extra-striate visual cortex involved in visual integration. Here, we review the current literature on global processing deficits in observers with either strabismic, anisometropic, or deprivation amblyopia. A range of global processing tasks have been used to investigate the extent of the cortical deficit in amblyopia including: global motion perception, global form perception, face perception, and biological motion. These tasks appear to be differentially affected by amblyopia. In general, observers with unilateral amblyopia appear to show deficits for local spatial processing and global tasks that require the segregation of signal from noise. In bilateral cases, the global processing deficits are exaggerated, and appear to extend to specialized perceptual systems such as those involved in face processing. PMID:24987383

  17. Implied Dynamics Biases the Visual Perception of Velocity

    PubMed Central

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand the anecdotic report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiment 2 and 3) or rectilinear (Experiment 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiment 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform. PMID:24667578

  18. Drifting while stepping in place in old adults: Association of self-motion perception with reference frame reliance and ground optic flow sensitivity.

    PubMed

    Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice

    2017-04-07

    Optic flow provides visual self-motion information and has been shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how the effects of ground optic flow direction on posture may be enhanced by intermittent podal contact with the ground, and how they relate to reliance on the visual FoR and to aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30 s under static stimulation, approaching and receding optic flow on the ground, and a control condition. We calculated center of pressure (COP) translation, and defined optic flow sensitivity as the ratio of COP translation velocity to absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
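    The VSQ defined in this abstract is a simple velocity ratio; a minimal sketch with hypothetical velocities (the units and sign conventions below are assumptions, not the paper's):

```python
def visual_self_motion_quotient(cop_velocity, optic_flow_velocity):
    """VSQ: center-of-pressure translation velocity divided by the
    absolute optic flow velocity."""
    if optic_flow_velocity == 0:
        raise ValueError("VSQ is undefined for zero optic flow velocity")
    return cop_velocity / abs(optic_flow_velocity)

# Hypothetical example: 0.02 m/s forward COP drift under 0.5 m/s ground flow
vsq = visual_self_motion_quotient(0.02, 0.5)   # -> 0.04
```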

  19. Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals

    PubMed Central

    Czuba, Thaddeus B.; Cormack, Lawrence K.; Huk, Alexander C.

    2016-01-01

    Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no “cross-cue” adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. SIGNIFICANCE STATEMENT Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how—or indeed if—these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. PMID:27798134

  20. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression for both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Motion sickness increases functional connectivity between visual motion and nausea-associated brain regions.

    PubMed

    Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly

    2017-01-01

    The brain networks supporting nausea are not yet understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation for anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals.

    PubMed

    Joo, Sung Jun; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2016-10-19

    Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how-or indeed if-these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. Copyright © 2016 the authors 0270-6474/16/3610791-12$15.00/0.

  3. The neurophysiology of biological motion perception in schizophrenia

    PubMed Central

    Jahshan, Carol; Wynn, Jonathan K; Mathis, Kristopher I; Green, Michael F

    2015-01-01

    Introduction The ability to recognize human biological motion is a fundamental aspect of social cognition that is impaired in people with schizophrenia. However, little is known about the neural substrates of impaired biological motion perception in schizophrenia. In the current study, we assessed event-related potentials (ERPs) to human and nonhuman movement in schizophrenia. Methods Twenty-four subjects with schizophrenia and 18 healthy controls completed a biological motion task while their electroencephalography (EEG) was simultaneously recorded. Subjects watched clips of point-light animations containing 100%, 85%, or 70% biological motion, and were asked to decide whether the clip resembled human or nonhuman movement. Three ERPs were examined: P1, N1, and the late positive potential (LPP). Results Behaviorally, schizophrenia subjects identified significantly fewer stimuli as human movement compared to healthy controls in the 100% and 85% conditions. At the neural level, P1 was reduced in the schizophrenia group but did not differ among conditions in either group. There were no group differences in N1 but both groups had the largest N1 in the 70% condition. There was a condition × group interaction for the LPP: Healthy controls had a larger LPP to 100% versus 85% and 70% biological motion; there was no difference among conditions in schizophrenia subjects. Conclusions Consistent with previous findings, schizophrenia subjects were impaired in their ability to recognize biological motion. The EEG results showed that biological motion did not influence the earliest stage of visual processing (P1). Although schizophrenia subjects showed the same pattern of N1 results relative to healthy controls, they were impaired at a later stage (LPP), reflecting a dysfunction in the identification of human form in biological versus nonbiological motion stimuli. PMID:25722951

  4. The Perception of the Higher Derivatives of Visual Motion.

    DTIC Science & Technology

    1986-06-24

    Subjects were presented with a target moving at uniform velocity in one run and with a target moving with either an accelerating or decelerating motion in another run, and had to decide on which of the two runs the motion was uniform. It was found that sensitivity to acceleration (as indicated by the proportion of correct discriminations) decreased… Surviving fragments of the scanned abstract also mention an experiment by Runeson (1975) and note that 20 subjects had 8 tracking runs with each of the three types of moving target.

  5. What you feel is what you see: inverse dynamics estimation underlies the resistive sensation of a delayed cursor.

    PubMed

    Takamuku, Shinya; Gomi, Hiroaki

    2015-07-22

    How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the 'inverse' computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. Human discrimination of visual direction of motion with and without smooth pursuit eye movements

    NASA Technical Reports Server (NTRS)

    Krukowski, Anton E.; Pirog, Kathleen A.; Beutter, Brent R.; Brooks, Kevin R.; Stone, Leland S.

    2003-01-01

    It has long been known that ocular pursuit of a moving target has a major influence on its perceived speed (Aubert, 1886; Fleischl, 1882). However, little is known about the effect of smooth pursuit on the perception of target direction. Here we compare the precision of human visual-direction judgments under two oculomotor conditions (pursuit vs. fixation). We also examine the impact of stimulus duration (200 ms vs. 800 ms) and absolute direction (cardinal vs. oblique). Our main finding is that direction discrimination thresholds in the fixation and pursuit conditions are indistinguishable. Furthermore, the two oculomotor conditions showed oblique effects of similar magnitudes. These data suggest that the neural direction signals supporting perception are the same with or without pursuit, despite remarkably different retinal stimulation. During fixation, the stimulus information is restricted to large, purely peripheral retinal motion, while during steady-state pursuit, the stimulus information consists of small, unreliable foveal retinal motion and a large efference-copy signal. A parsimonious explanation of our findings is that the signal limiting the precision of direction judgments is a neural estimate of target motion in head-centered (or world-centered) coordinates (i.e., a combined retinal and eye motion signal) as found in the medial superior temporal area (MST), and not simply an estimate of retinal motion as found in the middle temporal area (MT).

  7. Self-motion perception in autism is compromised by visual noise but integrated optimally across multiple senses

    PubMed Central

    Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.

    2015-01-01

    Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
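
    The Bayesian-prior account can be illustrated with standard reliability-weighted fusion of Gaussian estimates; every number below (cue means, variances, prior widths) is invented for the example rather than fit to the study's data:

```python
import numpy as np

def integrate(means, variances):
    """Reliability-weighted (Bayes-optimal) fusion of Gaussian estimates."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    return float(np.dot(weights, means)), float(1.0 / precisions.sum())

# Visual and vestibular heading cues as (mean deg, variance); illustrative.
fused_mean, fused_var = integrate([2.0, 0.0], [1.0, 4.0])
print(round(fused_mean, 2), fused_var < 1.0)  # 1.6 True: fusion beats either cue

# An attenuated (wider) prior at straight ahead pulls the fused estimate
# toward zero less than a sharp prior would -- the pattern modelled in ASD.
weak, _ = integrate([fused_mean, 0.0], [fused_var, 25.0])
sharp, _ = integrate([fused_mean, 0.0], [fused_var, 1.0])
print(abs(weak) > abs(sharp))  # True: weaker prior, more data-driven estimate
```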

  8. Contribution of self-motion perception to acoustic target localization.

    PubMed

    Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D

    2005-05-01

    The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamics of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of sound in a head-related frame, and on the sensory systems that perceive head and body movement, chiefly the vestibular system. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue with a laser pointer an acoustic target that was either rotated horizontally while the body was kept stationary, or kept stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions when the frequency was varied. During acoustic target rotation there was a reduction in the gain and an increase in the phase lag, while during whole-body rotations the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing the acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.
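
    Gain and phase of pursuit relative to the target, the two quantities compared across conditions here, can be estimated by projecting both signals onto the stimulus frequency. The traces below (frequency, amplitudes, and a 0.6 rad lag) are fabricated for illustration, not the study's recordings:

```python
import numpy as np

f, dt = 0.4, 0.01
t = np.arange(0.0, 10.0, dt)                      # exactly 4 full cycles
target = 20.0 * np.sin(2 * np.pi * f * t)
pursuit = 14.0 * np.sin(2 * np.pi * f * t - 0.6)  # attenuated and lagging

def component(x):
    """Complex amplitude of x at frequency f (single-bin Fourier projection)."""
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f * t))

ratio = component(pursuit) / component(target)
gain = abs(ratio)
phase_lag = -np.angle(ratio)
print(round(float(gain), 2), round(float(phase_lag), 2))  # 0.7 0.6
```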

  9. Time-Perception Network and Default Mode Network Are Associated with Temporal Prediction in a Periodic Motion Task

    PubMed Central

    Carvalho, Fabiana M.; Chaim, Khallil T.; Sanchez, Tiago A.; de Araujo, Draulio B.

    2016-01-01

    The updating of prospective internal models is necessary to accurately predict future observations. Uncertainty-driven internal model updating has been studied using a variety of perceptual paradigms and has revealed engagement of frontal and parietal areas. In a distinct literature, studies on temporal expectations have also characterized a time-perception network, which relies on temporal orienting of attention. However, the updating of prospective internal models is highly dependent on temporal attention, since temporal attention must be reoriented according to the current environmental demands. In this study, we used functional magnetic resonance imaging (fMRI) to evaluate to what extent the continuous manipulation of temporal prediction would recruit update-related areas and the time-perception network areas. We developed an exogenous temporal task that combines rhythm cueing and time-to-contact principles to generate implicit temporal expectation. Two patterns of motion were created: periodic (simple harmonic oscillation) and non-periodic (harmonic oscillation with variable acceleration). We found that non-periodic motion engaged the exogenous temporal orienting network, which includes the ventral premotor and inferior parietal cortices, and the cerebellum, as well as the presupplementary motor area, which has previously been implicated in internal model updating, and the motion-sensitive area MT+. Interestingly, we found a right-hemisphere preponderance suggesting the engagement of explicit timing mechanisms. We also show that the periodic motion condition, when compared to the non-periodic motion, activated a particular subset of the default-mode network (DMN) midline areas, including the left dorsomedial prefrontal cortex (DMPFC), anterior cingulate cortex (ACC), and bilateral posterior cingulate cortex/precuneus (PCC/PC).
This suggests that the DMN plays a role in processing contextually expected information and supports recent evidence that the DMN may reflect the validation of prospective internal models and predictive control. Taken together, our findings suggest that continuous manipulation of temporal predictions engages representations of temporal prediction as well as task-independent updating of internal models. PMID:27313526
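
    The two motion conditions can be sketched as signals: a fixed-frequency sinusoid versus one whose frequency (and hence acceleration) varies over time. Treating the non-periodic condition as a frequency-modulated sinusoid is an assumption for illustration; the frequencies and modulation depth are arbitrary:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1000)

# Periodic condition: simple harmonic oscillation at a fixed frequency.
periodic = np.sin(2 * np.pi * 0.5 * t)

# Non-periodic condition: harmonic oscillation with variable acceleration,
# sketched here as a slowly frequency-modulated sinusoid.
nonperiodic = np.sin(2 * np.pi * 0.5 * t + 1.5 * np.sin(2 * np.pi * 0.1 * t))

def crossing_intervals(x):
    """Times between successive zero crossings of x."""
    idx = np.where(np.diff(np.signbit(x)))[0]
    return np.diff(t[idx])

# Only the periodic pattern has (nearly) equal intervals between crossings,
# which is what makes its time-to-contact predictable.
print(np.std(crossing_intervals(periodic)) < np.std(crossing_intervals(nonperiodic)))
```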

  10. Vestibular signals in macaque extrastriate visual cortex are functionally appropriate for heading perception

    PubMed Central

    Liu, Sheng; Angelaki, Dora E.

    2009-01-01

    Visual and vestibular signals converge onto the dorsal medial superior temporal area (MSTd) of the macaque extrastriate visual cortex, which is thought to be involved in multisensory heading perception for spatial navigation. Peripheral otolith information, however, is ambiguous and cannot distinguish linear accelerations experienced during self-motion from those due to changes in spatial orientation relative to gravity. Here we show that, unlike peripheral vestibular sensors but similar to lobules 9 and 10 of the cerebellar vermis (nodulus and uvula), MSTd neurons respond selectively to heading and not to changes in orientation relative to gravity. In support of a role in heading perception, MSTd vestibular responses are also dominated by velocity-like temporal dynamics, which might optimize sensory integration with visual motion information. Unlike the cerebellar vermis, however, MSTd neurons also carry a spatial orientation-independent rotation signal from the semicircular canals, which could be useful in compensating for the effects of head rotation on the processing of optic flow. These findings show that vestibular signals in MSTd are appropriately processed to support a functional role in multisensory heading perception. PMID:19605631

  11. Illusory object motion in the centre of a radial pattern: The Pursuit-Pursuing illusion.

    PubMed

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed.

  12. Motion versus position in the perception of head-centred movement.

    PubMed

    Freeman, Tom C A; Sumnall, Jane H

    2002-01-01

    Observers can recover motion with respect to the head during an eye movement by comparing signals encoding retinal motion and the velocity of pursuit. Evidently there is a mismatch between these signals because perceived head-centred motion is not always veridical. One example is the Filehne illusion, in which a stationary object appears to move in the opposite direction to pursuit. Like the motion aftereffect, the phenomenal experience of the Filehne illusion is one in which the stimulus moves but does not seem to go anywhere. This raises problems when measuring the illusion by motion nulling because the more traditional technique confounds perceived motion with changes in perceived position. We devised a new nulling technique using global-motion stimuli that degraded familiar position cues but preserved cues to motion. Stimuli consisted of random-dot patterns comprising signal and noise dots that moved at the same retinal 'base' speed. Noise moved in random directions. In an eye-stationary speed-matching experiment we found that noise slowed perceived retinal speed as 'coherence strength' (i.e. percentage of signal) was reduced. The effect occurred over the two-octave range of base speeds studied and well above direction threshold. When the same stimuli were combined with pursuit, observers were able to null the Filehne illusion by adjusting coherence. A power law relating coherence to retinal base speed fit the data well with a negative exponent. Eye-movement recordings showed that pursuit was quite accurate. We then tested the hypothesis that the stimuli found at the null-points appeared to move at the same retinal speed. Two observers supported the hypothesis, a third partially, and a fourth showed a small linear trend. In addition, the retinal speed found by the traditional Filehne technique was similar to the matches obtained with the global-motion stimuli.
The results provide support for the idea that speed is the critical cue in head-centred motion perception.
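
    A power law with a negative exponent, like the one reported, can be fit by linear regression in log-log coordinates. The null-point data below are fabricated purely to demonstrate the fit, not taken from the paper:

```python
import numpy as np

# Illustrative null-point data (not the paper's measurements): percent
# coherence needed to null the Filehne illusion at each retinal base speed.
speed = np.array([1.0, 2.0, 4.0, 8.0])          # deg/s
coherence = np.array([80.0, 55.0, 40.0, 28.0])  # percent signal

# Fit coherence = a * speed**b by linear regression in log-log coordinates.
b, log_a = np.polyfit(np.log(speed), np.log(coherence), 1)
print(round(float(b), 2))  # -0.5: higher base speeds need less coherence
```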

  13. Cross-Category Adaptation: Objects Produce Gender Adaptation in the Perception of Faces

    PubMed Central

    Javadi, Amir Homayoun; Wee, Natalie

    2012-01-01

    Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated with females, and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes. PMID:23049942

  14. Motion planning for autonomous vehicle based on radial basis function neural network in unstructured environment.

    PubMed

    Chen, Jiajia; Zhao, Pan; Liang, Huawei; Mei, Tao

    2014-09-18

    The autonomous vehicle is an automated system equipped with features like environment perception, decision-making, motion planning, and control and execution technology. Navigating in an unstructured and complex environment is a huge challenge for autonomous vehicles, due to the irregular shape of the road, the requirement of real-time planning, and the nonholonomic constraints of the vehicle. This paper presents a motion planning method, based on the Radial Basis Function (RBF) neural network, to guide the autonomous vehicle in unstructured environments. The proposed algorithm extracts the drivable region from the perception grid map based on the global path, which is available in the road network. The sample points are randomly selected in the drivable region, and a gradient descent method is used to train the RBF network. The parameters of the motion-planning algorithm are verified through simulation and experiment. It is observed that the proposed approach produces a flexible, smooth, and safe path that can fit any road shape. The method is implemented on an autonomous vehicle and verified in many outdoor scenes; furthermore, a comparison of the proposed method with the existing well-known Rapidly-exploring Random Tree (RRT) method is presented. The experimental results show that the proposed method is highly effective in planning the vehicle path and offers better motion quality.
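
    The training step described (sample points from the drivable region, fit an RBF network by gradient descent) might be sketched as follows; the waypoints, basis widths, learning rate, and iteration count are all illustrative guesses, not the paper's parameters:

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian radial basis function activations for each sample in x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Hypothetical waypoints standing in for points sampled from the drivable
# region: lateral offset y of a smooth lane change versus longitudinal
# distance x (all values illustrative).
x = np.linspace(0.0, 10.0, 40)
y = 2.0 / (1.0 + np.exp(-(x - 5.0)))

centers = np.linspace(0.0, 10.0, 8)
phi = rbf_features(x, centers)
w = np.zeros(len(centers))

# Batch gradient descent on squared error, the training step named in the
# abstract (learning rate and iteration count are guesses).
lr = 1.0
for _ in range(5000):
    w -= lr * (phi.T @ (phi @ w - y)) / len(x)

mse = np.mean((phi @ w - y) ** 2)
print(mse < 0.05)  # the trained network closely reproduces the sampled path
```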

  15. Motion Planning for Autonomous Vehicle Based on Radial Basis Function Neural Network in Unstructured Environment

    PubMed Central

    Chen, Jiajia; Zhao, Pan; Liang, Huawei; Mei, Tao

    2014-01-01

    The autonomous vehicle is an automated system equipped with features like environment perception, decision-making, motion planning, and control and execution technology. Navigating in an unstructured and complex environment is a huge challenge for autonomous vehicles, due to the irregular shape of road, the requirement of real-time planning, and the nonholonomic constraints of vehicle. This paper presents a motion planning method, based on the Radial Basis Function (RBF) neural network, to guide the autonomous vehicle in unstructured environments. The proposed algorithm extracts the drivable region from the perception grid map based on the global path, which is available in the road network. The sample points are randomly selected in the drivable region, and a gradient descent method is used to train the RBF network. The parameters of the motion-planning algorithm are verified through the simulation and experiment. It is observed that the proposed approach produces a flexible, smooth, and safe path that can fit any road shape. The method is implemented on autonomous vehicle and verified against many outdoor scenes; furthermore, a comparison of proposed method with the existing well-known Rapidly-exploring Random Tree (RRT) method is presented. The experimental results show that the proposed method is highly effective in planning the vehicle path and offers better motion quality. PMID:25237902

  16. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  17. The Effect of Selected Cinemagraphic Elements on Audience Perception of Mediated Concepts.

    ERIC Educational Resources Information Center

    Orr, Quinn

    This study explores cinemagraphic and visual elements and their interrelations through the reinterpretation of previous research and literature. The cinemagraphic elements of visual images (camera angle, camera motion, subject motion, color, and lighting) work as a language requiring a proper grammar for the messages to be conveyed in their…

  18. Studies of human dynamic space orientation using techniques of control theory

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1974-01-01

    Studies of human orientation and manual control in high order systems are summarized. Data cover techniques for measuring and altering orientation perception, role of non-visual motion sensors, particularly the vestibular and tactile sensors, use of motion cues in closed loop control of simple stable and unstable systems, and advanced computer controlled display systems.

  19. Evidence against the temporal subsampling account of illusory motion reversal

    PubMed Central

    Kline, Keith A.; Eagleman, David M.

    2010-01-01

    An illusion of reversed motion may occur sporadically while viewing continuous smooth motion. This has been suggested as evidence of discrete temporal sampling by the visual system, in analogy to the sampling that generates the wagon-wheel effect on film. In an alternative theory, the illusion is not the result of discrete sampling but instead of perceptual rivalry between appropriately activated and spuriously activated motion detectors. Results of the current study demonstrate that illusory reversals of two spatially overlapping and orthogonal motions often occur separately, providing evidence against the possibility that illusory motion reversal (IMR) is caused by temporal sampling within a visual region. Further, we find that IMR occurs with non-uniform and non-periodic stimuli, an observation that is not accounted for by the temporal sampling hypothesis. We propose that a motion aftereffect is superimposed on the moving stimulus, sporadically allowing motion detectors for the reverse direction to dominate perception. PMID:18484852

  20. Spatiotemporal Integration and Object Perception in Infancy: Perceiving Unity versus Form.

    ERIC Educational Resources Information Center

    Van de Walle, Gretchen A.; Spelke, Elizabeth S.

    1996-01-01

    Investigated 5-month-olds' perception of an object whose center was occluded and whose ends were visible only in succession. Found that infants perceived the object as one connected whole when the ends underwent common motion but not when the ends were stationary. Results suggest that infants perceive object unity but not object form. (Author/BC)

  1. Relating Attention to Visual Mechanisms

    DTIC Science & Technology

    1989-02-28

    VI., Hillsdale, NJ: Erlbaum. Biederman, I. (1987) Recognition-by-components: A theory of human image understanding. Psychological Review, 94:115-147...perception (Coren, 1969; Festinger, Coren & Rivers, 1970; Brussell & Festinger, 1973; Brussell, 1973), motion perception (Dick, Ullman & Sagi, 1987...1985; Peterson, 1986; Hochberg & Peterson, 1987). These studies vary in the success with which they isolate a particular computation and some suffer

  2. Atypical basic movement kinematics in autism spectrum conditions

    PubMed Central

    Blakemore, Sarah-Jayne; Press, Clare

    2013-01-01

    Individuals with autism spectrum conditions have difficulties in understanding and responding appropriately to others. Additionally, they demonstrate impaired perception of biological motion and problems with motor control. Here we investigated whether individuals with autism move with an atypical kinematic profile, which might help to explain perceptual and motor impairments, and in principle may contribute to some of their higher level social problems. We recorded trajectory, velocity, acceleration and jerk while adult participants with autism and a matched control group conducted horizontal sinusoidal arm movements. Additionally, participants with autism took part in a biological motion perception task in which they classified observed movements as ‘natural’ or ‘unnatural’. Results show that individuals with autism moved with atypical kinematics; they did not minimize jerk to the same extent as the matched typical control group, and moved with greater acceleration and velocity. The degree to which kinematics were atypical was correlated with a bias towards perceiving biological motion as ‘unnatural’ and with the severity of autism symptoms as measured by the Autism Diagnostic Observation Schedule. We suggest that fundamental differences in movement kinematics in autism might help to explain their problems with motor control. Additionally, developmental experience of their own atypical kinematic profiles may lead to disrupted perception of others’ actions. PMID:23983031

  3. Parietal cortex mediates perceptual Gestalt grouping independent of stimulus size.

    PubMed

    Grassi, Pablo R; Zaretskaya, Natalia; Bartels, Andreas

    2016-06-01

    The integration of local moving elements into a unified gestalt percept has previously been linked to the posterior parietal cortex. There are two possible interpretations for the lack of involvement of other occipital regions. The first is that parietal cortex is indeed uniquely functionally specialized to perform grouping. Another possibility is that other visual regions can perform grouping as well, but that the large spatial separation of the local elements used previously exceeded their neurons' receptive field (RF) sizes, preventing their involvement. In this study we distinguished between these two alternatives. We measured whole-brain activity using fMRI in response to a bistable motion illusion that induced mutually exclusive percepts of either an illusory global Gestalt or of local elements. The stimulus was presented in two sizes, a large version known to activate IPS only, and a version sufficiently small to fit into the RFs of mid-level dorsal regions such as V5/MT. We found that none of the separately localized motion regions apart from parietal cortex showed a preference for global Gestalt perception, even for the smaller version of the stimulus. This outcome suggests that grouping-by-motion is mediated by a specialized size-invariant mechanism with parietal cortex as its anatomical substrate. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Auditory perception of a human walker.

    PubMed

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  5. Perception and control of rotorcraft flight

    NASA Technical Reports Server (NTRS)

    Owen, Dean H.

    1991-01-01

    Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationships among these problems are the basis of this article.

  6. Implied motion because of instability in Hokusai Manga activates the human motion-sensitive extrastriate visual cortex: an fMRI study of the impact of visual art.

    PubMed

    Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko

    2010-03-10

    The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as famous Ukiyoe artist. We found that 'Hokusai Manga' with implied motion, depicting human bodies engaged in a challenging tonic posture, significantly activated the motion-sensitive visual cortex including MT+ in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion in instability.

  7. Directional asymmetries and age effects in human self-motion perception.

    PubMed

    Roditi, Rachel E; Crane, Benjamin T

    2012-06-01

    Directional asymmetries in vestibular reflexes have aided the diagnosis of vestibular lesions; however, potential asymmetries in vestibular perception have not been well defined. This investigation sought to measure potential asymmetries in human vestibular perception. Vestibular perception thresholds were measured in 24 healthy human subjects between the ages of 21 and 68 years. Stimuli consisted of a single cycle of sinusoidal acceleration in a single direction lasting 1 or 2 s (1 or 0.5 Hz), delivered in sway (left-right), surge (forward-backward), heave (up-down), or yaw rotation. Subject-identified self-motion directions were analyzed using a forced-choice technique, which permitted thresholds to be independently determined for each direction. Non-motion stimuli were presented to measure possible response bias. A significant directional asymmetry in the dynamic response occurred in 27% of conditions tested within subjects, and in at least one type of motion in 92% of subjects. Directional asymmetries were usually consistent when retested in the same subject but did not occur consistently in one direction across the population, with the exception of heave at 0.5 Hz. Responses during null stimuli presentation suggested that asymmetries were not due to biased guessing. Multiple models were applied and compared to determine if sensitivities were direction specific. Using the Akaike information criterion, it was found that the model with direction-specific sensitivities better described the data in 86% of runs when compared with a model that used the same sensitivity for both directions. Mean thresholds for yaw were 1.3±0.9°/s at 0.5 Hz and 0.9±0.7°/s at 1 Hz and were independent of age. Thresholds for surge and sway were 1.7±0.8 cm/s at 0.5 Hz and 0.7±0.3 cm/s at 1.0 Hz for subjects <50 and were significantly higher in subjects >50 years old. Heave thresholds were higher and were independent of age.
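
    The Akaike information criterion comparison used here trades goodness of fit against parameter count; a minimal sketch, with invented log-likelihoods and parameter counts (the paper reports the direction-specific model winning in 86% of runs):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: lower values indicate the better model."""
    return 2 * n_params - 2 * log_likelihood

# Illustrative log-likelihoods for fits to one subject's direction responses
# (numbers invented for the example, not taken from the paper).
ll_shared = -52.0    # one sensitivity shared by both directions, 2 parameters
ll_separate = -47.5  # direction-specific sensitivities, 3 parameters

better_separate = aic(ll_separate, 3) < aic(ll_shared, 2)
print(better_separate)  # True: the fit gain outweighs the extra parameter
```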

  8. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    PubMed Central

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role. PMID:26042055

  9. Discrimination of curvature from motion during smooth pursuit eye movements and fixation.

    PubMed

    Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2017-09-01

Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature. Copyright © 2017 the American Physiological Society.
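The curvature comparisons described above can be sketched numerically. This is a hedged illustration (the radius, sample count, and function names are mine, not the study's) of how curvature can be estimated from sampled 2D trajectory coordinates such as target or gaze positions:

```python
import numpy as np

def trajectory_curvature(x, y):
    """Signed curvature at each sample of a 2D trajectory (e.g., x, y
    in degrees of visual angle), via finite-difference derivatives."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# A circular arc of radius R has constant curvature 1/R, so tighter
# curves (smaller R) yield larger curvature values.
t = np.linspace(0.0, np.pi / 2, 500)
R = 8.0  # hypothetical radius in degrees
kappa = trajectory_curvature(R * np.cos(t), R * np.sin(t))
```

Applied to a window of pursuit samples, the same estimate could be compared against the target's curvature; shorter windows give noisier estimates, consistent with the higher oculometric thresholds reported for small time windows.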

  10. Perception of object trajectory: parsing retinal motion into self and object movement components.

    PubMed

    Warren, Paul A; Rushton, Simon K

    2007-08-16

    A moving observer needs to be able to estimate the trajectory of other objects moving in the scene. Without the ability to do so, it would be difficult to avoid obstacles or catch a ball. We hypothesized that neural mechanisms sensitive to the patterns of motion generated on the retina during self-movement (optic flow) play a key role in this process, "parsing" motion due to self-movement from that due to object movement. We investigated this "flow parsing" hypothesis by measuring the perceived trajectory of a moving probe placed within a flow field that was consistent with movement of the observer. In the first experiment, the flow field was consistent with an eye rotation; in the second experiment, it was consistent with a lateral translation of the eyes. We manipulated the distance of the probe in both experiments and assessed the consequences. As predicted by the flow parsing hypothesis, manipulating the distance of the probe had differing effects on the perceived trajectory of the probe in the two experiments. The results were consistent with the scene geometry and the type of simulated self-movement. In a third experiment, we explored the contribution of local and global motion processing to the results of the first two experiments. The data suggest that the parsing process involves global motion processing, not just local motion contrast. The findings of this study support a role for optic flow processing in the perception of object movement during self-movement.
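The "flow parsing" subtraction can be illustrated with a toy linearized flow model. All quantities and names below are hypothetical (real optic flow geometry is richer), but the sketch captures the hypothesis: retinal motion is self-movement flow plus object motion, and subtracting the predicted self-movement component recovers the object's scene-relative trajectory:

```python
import numpy as np

def self_motion_flow(lateral_head_vel, depth):
    """Toy flow model: for pure lateral head translation, a stationary
    point's image motion is opposite to the head and scales as 1/depth."""
    return -np.asarray(lateral_head_vel, dtype=float) / depth

def flow_parse(retinal_vel, lateral_head_vel, depth):
    """Subtract the flow component predicted from self-movement to
    recover the object's scene-relative motion ("flow parsing")."""
    return np.asarray(retinal_vel, dtype=float) - self_motion_flow(lateral_head_vel, depth)

# A probe at 2 m moves straight up in the scene while the head
# translates rightward: its retinal motion is oblique, but parsing
# recovers the purely vertical object component.
head = np.array([0.06, 0.0])      # head velocity (rightward), arbitrary units
true_obj = np.array([0.0, 0.02])  # object velocity (upward) in the scene
retinal = self_motion_flow(head, depth=2.0) + true_obj
recovered = flow_parse(retinal, head, depth=2.0)
```

Because the self-movement flow scales with 1/depth, manipulating the probe's distance changes the component that must be subtracted, which is the signature the experiments exploit.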

  11. Basic quantitative assessment of visual performance in patients with very low vision.

    PubMed

    Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert

    2010-02-01

    A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative, forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing the visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
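The forced-choice modules are scored against their guessing rates (0.5 for two-alternative, 0.25 for four-alternative tasks). Below is a hedged sketch of the kind of exact binomial check that separates genuine perception from chance; the function and the example counts are illustrative, not the BaLM's actual scoring procedure:

```python
from math import comb

def above_chance(n_correct, n_trials, guess_rate, alpha=0.05):
    """One-sided exact binomial test: is a forced-choice score better
    than guessing (guess_rate = 0.5 for 2-AFC, 0.25 for 4-AFC)?
    Returns (significant, p_value)."""
    p = sum(comb(n_trials, k) * guess_rate ** k * (1 - guess_rate) ** (n_trials - k)
            for k in range(n_correct, n_trials + 1))
    return p < alpha, p
```

For example, 18/20 correct in a four-alternative module clears chance decisively, while 6/20 (expected by guessing: 5) does not.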

  12. Separating neural activity associated with emotion and implied motion: An fMRI study.

    PubMed

    Kolesar, Tiffany A; Kornelsen, Jennifer; Smith, Stephen D

    2017-02-01

    Previous research provides evidence for an emo-motoric neural network allowing emotion to modulate activity in regions of the nervous system related to movement. However, recent research suggests that these results may be due to the movement depicted in the stimuli. The purpose of the current study was to differentiate the unique neural activity of emotion and implied motion using functional MRI. Thirteen healthy participants viewed 4 sets of images: (a) negative stimuli implying movement, (b) negative stimuli not implying movement, (c) neutral stimuli implying movement, and (d) neutral stimuli not implying movement. A main effect for implied motion was found, primarily in regions associated with multimodal integration (bilateral insula and cingulate), and visual areas that process motion (bilateral middle temporal gyrus). A main effect for emotion was found primarily in occipital and parietal regions, indicating that emotion enhances visual perception. Surprisingly, emotion also activated the left precentral gyrus, a motor region. These results demonstrate that emotion elicits activity above and beyond that evoked by the perception of implied movement, but that the neural representations of these characteristics overlap. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Effect of contrast on the perception of direction of a moving pattern

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Watson, A. B.; Mulligan, J. B.

    1989-01-01

A series of experiments examining the effect of contrast on the perception of moving plaids was performed to test the hypothesis that the human visual system determines the direction of a moving plaid in a two-stage process: decomposition into component motion followed by application of the intersection-of-constraints rule. Although there is recent evidence that the first tenet of the hypothesis is correct, i.e., that plaid motion is initially decomposed into the motion of the individual grating components, the nature of the second-stage combination rule has not yet been established. It was found that when the gratings within the plaid are of different contrast, the perceived direction is not predicted by the intersection-of-constraints rule. There is a strong (up to 20 deg) bias in the direction of the higher-contrast grating. A revised model, which incorporates a contrast-dependent weighting of perceived grating speed as observed for one-dimensional patterns, can quantitatively predict most of the results. The results are then discussed in the context of various models of human visual motion processing and of physiological responses of neurons in the primate visual system.
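The two-stage account can be made concrete. The sketch below is a minimal illustration of the intersection-of-constraints (IOC) computation, with a crude stand-in for the contrast-dependent weighting (the revised model's actual speed-weighting function is not reproduced here; angles, speeds, and names are mine):

```python
import numpy as np

def ioc_direction(normal_dirs_deg, speeds):
    """Intersection of constraints: solve for the single pattern
    velocity whose projection onto each grating's motion normal equals
    that grating's (perceived) speed; return its direction in degrees."""
    n = np.deg2rad(normal_dirs_deg)
    A = np.column_stack([np.cos(n), np.sin(n)])
    vx, vy = np.linalg.solve(A, speeds)
    return np.degrees(np.arctan2(vy, vx))

# Symmetric plaid: components drifting toward +45 and -45 deg at equal
# speed combine to rightward (0 deg) pattern motion.
veridical = ioc_direction([45.0, -45.0], [1.0, 1.0])

# If the higher-contrast component's speed is effectively overestimated
# (a simplification of the contrast weighting described above), the IOC
# solution tilts toward that component's direction.
biased = ioc_direction([45.0, -45.0], [1.2, 1.0])
```

With equal speeds the solution is the veridical 0° direction; inflating one component's perceived speed biases the solution toward that grating, qualitatively matching the reported misperception.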

  14. Ageing vision and falls: a review.

    PubMed

    Saftari, Liana Nafisa; Kwon, Oh-Sang

    2018-04-23

    Falls are the leading cause of accidental injury and death among older adults. One of three adults over the age of 65 years falls annually. As the size of elderly population increases, falls become a major concern for public health and there is a pressing need to understand the causes of falls thoroughly. While it is well documented that visual functions such as visual acuity, contrast sensitivity, and stereo acuity are correlated with fall risks, little attention has been paid to the relationship between falls and the ability of the visual system to perceive motion in the environment. The omission of visual motion perception in the literature is a critical gap because it is an essential function in maintaining balance. In the present article, we first review existing studies regarding visual risk factors for falls and the effect of ageing vision on falls. We then present a group of phenomena such as vection and sensory reweighting that provide information on how visual motion signals are used to maintain balance. We suggest that the current list of visual risk factors for falls should be elaborated by taking into account the relationship between visual motion perception and balance control.

  15. Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior.

    PubMed

    Fath, Aaron J; Snapp-Childs, Winona; Kountouriotis, Georgios K; Bingham, Geoffrey P

    2016-04-01

    Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90-170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50-60 ms). We tested if this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object in several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.

  16. He throws like a girl (but only when he's sad): emotion affects sex-decoding of biological motion displays.

    PubMed

    Johnson, Kerri L; McKay, Lawrie S; Pollick, Frank E

    2011-05-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming the morphological confounding inherent in facial displays. In four studies, participants' judgments revealed gender stereotyping. Observers accurately perceived emotion from biological motion displays (Study 1), and this affected sex categorizations. Angry displays were overwhelmingly judged to be men; sad displays were judged to be women (Studies 2-4). Moreover, this pattern remained strong when stimuli were equated for velocity (Study 3). We argue that these results were obtained because perceivers applied gender stereotypes of emotion to infer sex category (Study 4). Implications for both vision sciences and social psychology are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Illusory object motion in the centre of a radial pattern: The Pursuit–Pursuing illusion

    PubMed Central

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed. PMID:23145267

  18. Is perception of vertical impaired in individuals with chronic stroke with a history of 'pushing'?

    PubMed

    Mansfield, Avril; Fraser, Lindsey; Rajachandrakumar, Roshanth; Danells, Cynthia J; Knorr, Svetlana; Campos, Jennifer

    2015-03-17

Post-stroke 'pushing' behaviour appears to be caused by impaired perception of vertical in the roll plane. While pushing behaviour typically resolves with stroke recovery, it is not known if misperception of vertical persists. The purpose of this study was to determine if perception of vertical is impaired amongst stroke survivors with a history of pushing behaviour. Fourteen individuals with chronic stroke (7 with history of pushing) and 10 age-matched healthy controls participated. Participants sat upright on a chair surrounded by a curved projection screen in a laboratory mounted on a motion base. Subjective visual vertical (SVV) was assessed using a 30-trial, forced-choice protocol. For each trial participants viewed a line projected on the screen and indicated if the line was tilted to the right or the left. For the subjective postural vertical (SPV), participants wore a blindfold and the motion base was tilted to the left or right by 10-20°. Participants were asked to adjust the angular movements of the motion base until they felt upright. SPV was not different between groups. SVV was significantly more biased towards the contralesional side for participants with history of pushing (-3.6 ± 4.1°) than those without (-0.1 ± 1.4°). Two individuals with history of pushing had SVV or SPV outside the maximum for healthy controls. Impaired vertical perception may persist in some individuals with prior post-stroke pushing, despite resolution of pushing behaviours, which could have consequences for functional mobility and falls. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  19. Face processing in autism: Reduced integration of cross-feature dynamics.

    PubMed

    Shah, Punit; Bird, Geoffrey; Cook, Richard

    2016-02-01

Characteristic problems with social interaction have prompted considerable interest in the face processing of individuals with Autism Spectrum Disorder (ASD). Studies suggest that reduced integration of information from disparate facial regions likely contributes to difficulties recognizing static faces in this population. Recent work also indicates that observers with ASD have problems using patterns of facial motion to judge identity and gender, and may be less able to derive global motion percepts. These findings raise the possibility that feature integration deficits also impact the perception of moving faces. To test this hypothesis, we examined whether observers with ASD exhibit susceptibility to a new dynamic face illusion, thought to index integration of moving facial features. When typical observers view eye-opening and -closing in the presence of asynchronous mouth-opening and -closing, the concurrent mouth movements induce a strong illusory slowing of the eye transitions. However, we find that observers with ASD are not susceptible to this illusion, suggestive of weaker integration of cross-feature dynamics. Nevertheless, observers with ASD and typical controls were equally able to detect the physical differences between comparison eye transitions. Importantly, this confirms that observers with ASD were able to fixate the eye-region, indicating that the striking group difference has a perceptual, not attentional origin. The clarity of the present results contrasts starkly with the modest effect sizes and equivocal findings seen throughout the literature on static face perception in ASD. We speculate that differences in the perception of facial motion may be a more reliable feature of this condition. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Effects of inverting contour and features on processing for static and dynamic face perception: an MEG study.

    PubMed

    Miki, Kensaku; Takeshima, Yasuyuki; Watanabe, Shoko; Honda, Yukiko; Kakigi, Ryusuke

    2011-04-06

    We investigated the effects of inverting facial contour (hair and chin) and features (eyes, nose and mouth) on processing for static and dynamic face perception using magnetoencephalography (MEG). We used apparent motion, in which the first stimulus (S1) was replaced by a second stimulus (S2) with no interstimulus interval and subjects perceived visual motion, and presented three conditions as follows: (1) U&U: Upright contour and Upright features, (2) U&I: Upright contour and Inverted features, and (3) I&I: Inverted contour and Inverted features. In static face perception (S1 onset), the peak latency of the fusiform area's activity, which was related to static face perception, was significantly longer for U&I and I&I than for U&U in the right hemisphere and for U&I than for U&U and I&I in the left. In dynamic face perception (S2 onset), the strength (moment) of the occipitotemporal area's activity, which was related to dynamic face perception, was significantly larger for I&I than for U&U and U&I in the right hemisphere, but not the left. These results can be summarized as follows: (1) in static face perception, the activity of the right fusiform area was more affected by the inversion of features while that of the left fusiform area was more affected by the disruption of the spatial relation between the contour and features, and (2) in dynamic face perception, the activity of the right occipitotemporal area was affected by the inversion of the facial contour. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Strong motion deficits in dyslexia associated with DCDC2 gene alteration.

    PubMed

    Cicchini, Guido Marco; Marino, Cecilia; Mascheretti, Sara; Perani, Daniela; Morrone, Maria Concetta

    2015-05-27

Dyslexia is a specific impairment in reading that affects 1 in 10 people. Previous studies have failed to isolate a single cause of the disorder, but several candidate genes have been reported. We measured motion perception in two groups of dyslexics, with and without a deletion within the DCDC2 gene, a risk gene for dyslexia. We found impairment for motion particularly strong at high spatial frequencies in the population carrying the deletion. The data suggest that deficits in motion processing occur in a specific genotype, rather than the entire dyslexia population, contributing to the large variability in impairment of motion thresholds in dyslexia reported in the literature. Copyright © 2015 the authors.

  2. Implied motion language can influence visual spatial memory.

    PubMed

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  3. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1987-01-01

The objectives of this proposal were developed to further explore and quantify the orientation reference selection abilities of subjects and the relation, if any, between motion sickness and orientation reference selection. The overall objectives of this proposal are to determine (1) if motion sickness susceptibility is related to sensory orientation reference selection abilities of subjects, (2) if abnormal vertical canal-otolith function is the source of these abnormal posture control strategies and if it can be quantified by vestibular and oculomotor reflex measurements, and (3) if quantifiable measures of perception of vestibular and visual motion cues can be related to motion sickness susceptibility and to orientation reference selection ability demonstrated by tests which systematically control the sensory information available for orientation.

  4. Neurons compute internal models of the physical laws of motion.

    PubMed

    Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David

    2004-07-29

    A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.
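The otolith ambiguity described here follows from the physics of linear accelerometers, which sense gravito-inertial force rather than translation alone. Below is a one-dimensional sketch (the sign convention and function names are mine) of both the ambiguity and the canal-based disambiguation attributed to the internal model:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def otolith_lateral(head_tilt_rad, lateral_acc):
    """Lateral specific force sensed by the otoliths in a toy 1-D sign
    convention: translation and gravity-relative tilt superimpose, so
    the two are indistinguishable from this signal alone."""
    return lateral_acc + G * np.sin(head_tilt_rad)

def internal_model_translation(otolith_signal, canal_tilt_rad):
    """Disambiguating computation: subtract the gravity component
    implied by the rotation-derived (canal) tilt estimate, leaving an
    estimate of true linear acceleration."""
    return otolith_signal - G * np.sin(canal_tilt_rad)

tilt = np.deg2rad(10.0)
# Equivalent stimuli: a 10-deg static tilt and a translation at
# g*sin(10 deg) produce identical otolith signals...
f_tilt = otolith_lateral(tilt, 0.0)
f_trans = otolith_lateral(0.0, G * np.sin(tilt))
# ...but combining the otolith signal with a canal tilt estimate
# recovers the correct interpretation in each case.
est_during_tilt = internal_model_translation(f_tilt, tilt)
est_during_trans = internal_model_translation(f_trans, 0.0)
```

The neurons in the abstract are reported to carry signals consistent with this subtraction, i.e., with an internal model of the physical equations of motion rather than the raw accelerometer input.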

  5. The Bicycle Illusion: Sidewalk Science Informs the Integration of Motion and Shape Perception

    ERIC Educational Resources Information Center

    Masson, Michael E. J.; Dodd, Michael D.; Enns, James T.

    2009-01-01

    The authors describe a new visual illusion first discovered in a natural setting. A cyclist riding beside a pair of sagging chains that connect fence posts appears to move up and down with the chains. In this illusion, a static shape (the chains) affects the perception of a moving shape (the bicycle), and this influence involves assimilation…

  6. Causal capture effects in chimpanzees (Pan troglodytes).

    PubMed

    Matsuno, Toyomi; Tomonaga, Masaki

    2017-01-01

    Extracting a cause-and-effect structure from the physical world is an important demand for animals living in dynamically changing environments. Human perceptual and cognitive mechanisms are known to be sensitive and tuned to detect and interpret such causal structures. In contrast to rigorous investigations of human causal perception, the phylogenetic roots of this perception are not well understood. In the present study, we aimed to investigate the susceptibility of nonhuman animals to mechanical causality by testing whether chimpanzees perceived an illusion called causal capture (Scholl & Nakayama, 2002). Causal capture is a phenomenon in which a type of bistable visual motion of objects is perceived as causal collision due to a bias from a co-occurring causal event. In our experiments, we assessed the susceptibility of perception of a bistable stream/bounce motion event to a co-occurring causal event in chimpanzees. The results show that, similar to in humans, causal "bounce" percepts were significantly increased in chimpanzees with the addition of a task-irrelevant causal bounce event that was synchronously presented. These outcomes suggest that the perceptual mechanisms behind the visual interpretation of causal structures in the environment are evolutionarily shared between human and nonhuman animals. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Retrieval-Induced Inhibition in Short-Term Memory.

    PubMed

    Kang, Min-Suk; Choi, Joongrul

    2015-07-01

    We used a visual illusion called motion repulsion as a model system for investigating competition between two mental representations. Subjects were asked to remember two random-dot-motion displays presented in sequence and then to report the motion directions for each. Remembered motion directions were shifted away from the actual motion directions, an effect similar to the motion repulsion observed during perception. More important, the item retrieved second showed greater repulsion than the item retrieved first. This suggests that earlier retrieval exerted greater inhibition on the other item being held in short-term memory. This retrieval-induced motion repulsion could be explained neither by reduced cognitive resources for maintaining short-term memory nor by continued inhibition between short-term memory representations. These results indicate that retrieval of memory representations inhibits other representations in short-term memory. We discuss mechanisms of retrieval-induced inhibition and their implications for the structure of memory. © The Author(s) 2015.

  8. Two motion systems with common and separate pathways for color and luminance.

    PubMed Central

    Gorea, A; Papathomas, T V; Kovacs, I

    1993-01-01

We present psychophysical experiments that reveal two motion systems, a specific and an unspecific one. The specific system prevails at medium to high temporal frequencies. It comprises at least two separate motion pathways that are selective for color and for luminance and that do not interact until after the motion signal is extracted separately in each. By contrast, the unspecific system prevails at low temporal frequencies and it combines color and luminance signals at an earlier stage, before motion extraction. The successful implementation of an efficient and accurate technique for assessing equiluminance corroborates further the main findings. These results offer a general framework for understanding the nature of interactions between color and luminance signals in motion perception and suggest that previously proposed dichotomies in motion processing may be encompassed by the specific/unspecific dichotomy proposed here. PMID:8248227

  9. Illusory visual motion stimulus elicits postural sway in migraine patients

    PubMed Central

    Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi

    2015-01-01

    Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832

  10. Space and motion perception and discomfort in air travel.

    PubMed

    Ramos, Renato T; de Mattos, Danielle A; Rebouças, J Thales S; Ranvaud, Ronald D

    2012-12-01

The perception of comfort during air trips is determined by several factors. External factors like cabin design and environmental parameters (temperature, humidity, air pressure, noise, and vibration) interact with individual characteristics (anxiety traits, fear of flying, and personality) from arrival at the airport to landing at the destination. In this study, we investigated the influence of space and motion discomfort (SMD), fear of heights, and anxiety on comfort perception during all phases of air travel. We evaluated 51 frequent air travelers through a modified version of the Flight Anxiety Situations Questionnaire (FAS), in which new items were added and where the subjects were asked to report their level of discomfort or anxiety (not fear) for each phase of air travel (Cronbach's alpha = 0.974). Correlations were investigated among these scales: State-Trait Anxiety Inventory (STAI), Cohen's Acrophobia Questionnaire, and the Situational Characteristics Questionnaire (SitQ, designed to estimate SMD levels). Scores of SitQ correlated with discomfort in situations involving space and movement perception (Pearson's rho = 0.311), while discomfort was associated with cognitive mechanisms related to scores in the anxiety scales (Pearson's rho = 0.375). Anxiety traits were important determinants of comfort perception before and after flight, while the influence of SMD was more significant during the time spent in the aircraft cabin. SMD seems to be an important modulator of comfort perception in air travel. Its influence on physical well-being and probably on cognitive performance, with possible effects on flight safety, deserves further investigation.
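The reliability statistic reported for the modified FAS can be computed directly. A minimal sketch of Cronbach's alpha for an items-by-respondents score matrix (the toy data are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha, the internal-consistency statistic, for a
    (k items x n respondents) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[0]
    item_var_sum = scores.var(axis=1, ddof=1).sum()
    total_var = scores.sum(axis=0).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_var_sum / total_var)
```

Two identical items give alpha = 1 (perfect consistency); less correlated items pull alpha down, and a value of 0.974 as reported indicates very high consistency across the questionnaire's items.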

  11. Experience affects the use of ego-motion signals during 3D shape perception.

    PubMed

    Jain, Anshul; Backus, Benjamin T

    2010-12-29

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.
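The "stationarity prior" account lends itself to a small Bayesian sketch. Everything below is an illustrative assumption (the likelihoods, pseudo-count, and update rule are mine, not the authors' model); it shows only the logic: with fully ambiguous flow the prior decides the percept, and training data shift the prior:

```python
def posterior_stationary(prior, like_stationary=1.0, like_moving=1.0):
    """Posterior belief that an ambiguous structure-from-motion object
    is stationary. When the flow field is fully ambiguous the two
    likelihoods are equal, so the prior alone determines the percept."""
    num = prior * like_stationary
    return num / (num + (1.0 - prior) * like_moving)

def update_prior(prior, saw_stationary, pseudo_count=20.0):
    """Toy experience-dependent update: nudge the stationarity prior
    toward what the disambiguating training cues indicated."""
    target = 1.0 if saw_stationary else 0.0
    return (pseudo_count * prior + target) / (pseudo_count + 1.0)

# A group repeatedly trained on "moving" interpretations drifts to a
# weaker stationarity prior, so identical ambiguous test stimuli are
# subsequently more often seen as moving.
p = 0.8
for _ in range(50):
    p = update_prior(p, saw_stationary=False)
```

Maintaining a separate `p` per visual-field location would mirror the finding that the two priors can be trained independently at different locations.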

  12. Temporal ventriloquism along the path of apparent motion: speed perception under different spatial grouping principles.

    PubMed

    Ogulmus, Cansu; Karacaoglu, Merve; Kafaligonul, Hulusi

    2018-03-01

    The coordination of intramodal perceptual grouping and crossmodal interactions plays a critical role in constructing coherent multisensory percepts. However, the basic principles underlying such coordinating mechanisms still remain unclear. By taking advantage of an illusion called temporal ventriloquism and its influences on perceived speed, we investigated how audiovisual interactions in time are modulated by the spatial grouping principles of vision. In our experiments, we manipulated the spatial grouping principles of proximity, uniform connectedness, and similarity/common fate in apparent motion displays. Observers compared the speed of apparent motions across different sound timing conditions. Our results revealed that the effects of sound timing (i.e., temporal ventriloquism effects) on perceived speed also existed in visual displays containing more than one object and were modulated by different spatial grouping principles. In particular, uniform connectedness was found to modulate these audiovisual interactions in time. The effect of sound timing on perceived speed was smaller when horizontal connecting bars were introduced along the path of apparent motion. When the objects in each apparent motion frame were not connected or connected with vertical bars, the sound timing was more influential compared to the horizontal bar conditions. Overall, our findings here suggest that the effects of sound timing on perceived speed exist in different spatial configurations and can be modulated by certain intramodal spatial grouping principles such as uniform connectedness.

  13. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback]

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human-factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator providing varying acceleration levels is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The forced-choice method used is based on the theory of signal detectability.

  14. Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function

    PubMed Central

    Jiménez Barbosa, Ingrid Astrid

    2015-01-01

    Purpose To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass pattern (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than non dry-cleaners, particularly along the blue/yellow axis. 
In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low-level deficits (such as the perception of contrast and discrimination of colour) and high-level visual deficits such as the perception of global form and motion, but not visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026
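    The pattern-coherence manipulation described in Experiment 1 can be sketched as follows (a standard global-dot-motion construction with hypothetical parameters; details such as signal-dot reassignment vary between implementations):

```python
import math
import random

# Sketch of a global-dot-motion (GDM) coherence stimulus: a `coherence`
# fraction of dots move in the signal direction, the rest move in random
# directions. Parameters are illustrative, not the study's.
def gdm_frame_velocities(n_dots, coherence, signal_dir_deg, speed=1.0,
                         rng=random.Random(0)):
    """Return per-dot (vx, vy) velocities for one frame."""
    n_signal = round(n_dots * coherence)
    vels = []
    for i in range(n_dots):
        theta = (math.radians(signal_dir_deg) if i < n_signal
                 else rng.uniform(0.0, 2.0 * math.pi))
        vels.append((speed * math.cos(theta), speed * math.sin(theta)))
    return vels

# 30% coherence, rightward signal: 30 of 100 dots share the signal direction.
vels = gdm_frame_velocities(100, 0.30, 0.0)
print(sum(1 for vx, vy in vels if (vx, vy) == (1.0, 0.0)))  # prints 30
```

Raising the coherence threshold at which observers can report the global direction is the measure by which the study quantifies impaired global motion sensitivity.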

  15. Synchronous and asynchronous perceptual bindings of colour and motion following identical stimulations.

    PubMed

    McIntyre, Morgan E; Arnold, Derek H

    2018-05-01

    When a moving surface alternates in colour and direction, perceptual couplings of colour and motion can differ from their physical correspondence. Periods of motion tend to be perceptually bound with physically delayed colours - a colour/motion perceptual asynchrony. This can be eliminated by motion transparency. Here we show that the colour/motion perceptual asynchrony is not invariably eliminated by motion transparency. Nor is it an inevitable consequence given a particular physical input. Instead, it can emerge when moving surfaces are perceived as alternating in direction, even if those surfaces seem transparent, and it is eliminated when surfaces are perceived as moving invariably. For a given observer either situation can result from exposure to a common input. Our findings suggest that neural events that promote the perception of motion reversals are causal of the colour/motion perceptual asynchrony. Moreover, they suggest that motion transparency and coherence can be signalled simultaneously by subpopulations of direction-selective neurons, with this conflict instantaneously resolved by a competitive winner-takes-all interaction, which can instantiate or eliminate colour/motion perceptual asynchrony. Copyright © 2017. Published by Elsevier Ltd.

  16. Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion

    PubMed Central

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. 
These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects. PMID:21383834

  17. The Experience of Force: The Role of Haptic Experience of Forces in Visual Perception of Object Motion and Interactions, Mental Simulation, and Motion-Related Judgments

    ERIC Educational Resources Information Center

    White, Peter A.

    2012-01-01

    Forces are experienced in actions on objects. The mechanoreceptor system is stimulated by proximal forces in interactions with objects, and experiences of force occur in a context of information yielded by other sensory modalities, principally vision. These experiences are registered and stored as episodic traces in the brain. These stored…

  18. The Perception of Facial Expressions and Stimulus Motion by Two- and Five-Month-Old Infants Using Holographic Stimuli.

    ERIC Educational Resources Information Center

    Nelson, Charles A.; Horowitz, Frances Degen

    1983-01-01

    Holograms of faces were used to study two- and five-month-old infants' discriminations of changes in facial expression and pose when the stimulus was seen to move or to remain stationary. While no evidence was found suggesting that infants preferred the moving face, evidence indicated that motion contrasts facilitate face recognition. (Author/RH)

  19. Transitions between central and peripheral vision create spatial/temporal distortions: a hypothesis concerning the perceived break of the curveball.

    PubMed

    Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert

    2010-10-13

    The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. 
However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing.
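    The physically smooth path attributed to the curveball can be illustrated with a back-of-envelope calculation (all numbers here are hypothetical, not taken from the study):

```python
# Under a roughly constant lateral Magnus acceleration a, the deflection
# after time t is d = a * t**2 / 2: most of the drift accrues late in
# flight, yet the path has no physical discontinuity anywhere.
# Values below (acceleration, distance, pitch speed) are illustrative.
def deflection(a_lat, t):
    return 0.5 * a_lat * t * t

flight_time = 18.4 / 40.0                  # ~18.4 m of flight at ~40 m/s
early = deflection(9.0, flight_time / 2)   # halfway to the plate
late = deflection(9.0, flight_time)        # at the plate
print(round(early, 2), round(late, 2))     # prints 0.24 0.95
```

The quadratic growth means three quarters of the total drift occurs in the second half of flight, which is consistent with batters noticing the "break" only as the ball nears the plate, even though the trajectory itself is smooth.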

  20. Recognition of tennis serve performed by a digital player: comparison among polygon, shadow, and stick-figure models.

    PubMed

    Ida, Hirofumi; Fukuhara, Kazunobu; Ishii, Motonobu

    2012-01-01

    The objective of this study was to assess the cognitive effect of human character models on the observer's ability to extract relevant information from computer graphics animation of tennis serve motions. Three digital human models (polygon, shadow, and stick-figure) were used to display the computationally simulated serve motions, which were perturbed at the racket-arm by modulating the speed (slower or faster) of one of the joint rotations (wrist, elbow, or shoulder). Twenty-one experienced tennis players and 21 novices made discrimination responses about the modulated joint and also specified the perceived swing speeds on a visual analogue scale. The results showed that the discrimination accuracies of the experienced players were both above and below chance level depending on the modulated joint, whereas those of the novices mostly remained at chance or guessing levels. As far as the experienced players were concerned, the polygon model decreased discrimination accuracy compared with the stick-figure model. This suggests that complicated pictorial information may have a distracting effect on the recognition of the observed action. On the other hand, the perceived swing speed of the perturbed motion relative to the control was lower for the stick-figure model than for the polygon model regardless of skill level. This result suggests that simplified visual information can bias the perception of motion speed toward slower values. It was also shown that increasing the joint rotation speed increased the perceived swing speed, although the resulting racket velocity had little correlation with this speed sensation. Collectively, an observer's recognition of the motion pattern and perception of the motion speed can be affected by the pictorial information of the human model as well as by the perturbation processing applied to the observed motion.

  1. Motion mechanisms with different spatiotemporal characteristics identified by an MAE technique with superimposed gratings.

    PubMed

    Shioiri, Satoshi; Matsumiya, Kazumichi

    2009-05-29

    We investigated spatiotemporal characteristics of motion mechanisms using a new type of motion aftereffect (MAE) we found. Our stimulus comprised two superimposed sinusoidal gratings with different spatial frequencies. After exposure to the moving stimulus, observers perceived the MAE in the static test in the direction opposite to that of the high spatial frequency grating even when low spatial frequency motion was perceived during adaptation. In contrast, in the flicker test, the MAE was perceived in the direction opposite to that of the low spatial frequency grating. These MAEs indicate that two different motion systems contribute to motion perception and can be isolated by using different test stimuli. Using a psychophysical technique based on the MAE, we investigated the differences between the two motion mechanisms. The results showed that the static MAE is the aftereffect of the motion system with a high spatial and low temporal frequency tuning (slow motion detector) and the flicker MAE is the aftereffect of the motion system with a low spatial and high temporal frequency tuning (fast motion detector). We also revealed that the two motion detectors differ in orientation tuning, temporal frequency tuning, and sensitivity to relative motion.
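    The adapting stimulus described above can be sketched directly: the sum of two sinusoidal gratings of different spatial frequencies drifting in opposite directions (the parameters below are illustrative, not the paper's):

```python
import math

# Two superimposed drifting sinusoidal gratings: a low-spatial-frequency
# component moving one way and a high-spatial-frequency component moving
# the other. All parameter values are illustrative.
def stimulus_luminance(x, t, f_low=0.5, f_high=2.0, v_low=4.0, v_high=-1.0):
    """Normalized luminance at position x (deg) and time t (s): the sum of
    a low-SF grating drifting at v_low deg/s and a high-SF grating
    drifting at v_high deg/s."""
    low = math.sin(2.0 * math.pi * f_low * (x - v_low * t))
    high = math.sin(2.0 * math.pi * f_high * (x - v_high * t))
    return 0.5 * (low + high)

print(stimulus_luminance(0.0, 0.0))  # prints 0.0
```

Because the two components move in opposite directions, a static versus flickering test probe can reveal aftereffects from the high- or low-frequency adapting component separately, which is the logic of the MAE technique described above.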

  2. Micro-calibration of space and motion by photoreceptors synchronized in parallel with cortical oscillations: A unified theory of visual perception.

    PubMed

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike

    2018-01-01

    A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism for this is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion, 2) purpose of lateral inhibition, 3) speed of visual perception, and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness, which could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space formed by membrane potential-based oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them "primary, secondary, and tertiary roles" of vision.
Finally, we propose that Gamma waves that are higher in strength and volume allow communication among the retina, thalamus, and various areas of the cortex, and synchronization brings cortical faculties to the retina, while the thalamus is the link that couples the retina to the rest of the brain through activity by gamma oscillations. This novel theory lays groundwork for further research by providing a theoretical understanding that expands upon the functions of the retina, photoreceptors, and retinal plexus to include parallel processing needed to form the internal visual space that we perceive as the external world. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Multiple Causal Links Between Magnocellular-Dorsal Pathway Deficit and Developmental Dyslexia.

    PubMed

    Gori, Simone; Seitz, Aaron R; Ronconi, Luca; Franceschini, Sandro; Facoetti, Andrea

    2016-10-17

    Although impaired auditory-phonological processing is the most popular explanation of developmental dyslexia (DD), the literature shows that the combination of several causes rather than a single factor contributes to DD. Functioning of the visual magnocellular-dorsal (MD) pathway, which plays a key role in motion perception, is a much debated, but heavily suspected factor contributing to DD. Here, we employ a comprehensive approach that incorporates all the accepted methods required to test the relationship between MD pathway dysfunction and DD. The results of 4 experiments show that (1) motion perception is impaired in children with dyslexia in comparison both with age-matched and with reading-level controls; (2) pre-reading visual motion perception, independently from auditory-phonological skill, predicts future reading development; and (3) targeted MD training (not involving any auditory-phonological stimulation) leads to improved reading skill in children and adults with DD. Our findings demonstrate, for the first time, a causal relationship between MD deficits and DD, virtually closing a 30-year-long debate. Since MD dysfunction can be diagnosed much earlier than reading and language disorders, our findings pave the way for low resource-intensive, early prevention programs that could drastically reduce the incidence of DD. © The Author 2015. Published by Oxford University Press. All rights reserved.

  4. Direction detection thresholds of passive self-motion in artistic gymnasts.

    PubMed

    Hartmann, Matthias; Haller, Katia; Moser, Ivan; Hossner, Ernst-Joachim; Mast, Fred W

    2014-04-01

    In this study, we compared direction detection thresholds of passive self-motion in the dark between artistic gymnasts and controls. Twenty-four professional female artistic gymnasts (ranging from 7 to 20 years) and age-matched controls were seated on a motion platform and asked to discriminate the direction of angular (yaw, pitch, roll) and linear (leftward-rightward) motion. Gymnasts showed lower thresholds for the linear leftward-rightward motion. Interestingly, there was no difference for the angular motions. These results show that the outstanding self-motion abilities in artistic gymnasts are not related to an overall higher sensitivity in self-motion perception. With respect to vestibular processing, our results suggest that gymnastic expertise is exclusively linked to superior interpretation of otolith signals when no change in canal signals is present. In addition, thresholds were overall lower for the older (14-20 years) than for the younger (7-13 years) participants, indicating the maturation of vestibular sensitivity from childhood to adolescence.

  5. Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion.

    PubMed

    Peelen, Marius V; Wiggett, Alison J; Downing, Paul E

    2006-03-16

    Accurate perception of the actions and intentions of other people is essential for successful interactions in a social environment. Several cortical areas that support this process respond selectively in fMRI to static and dynamic displays of human bodies and faces. Here we apply pattern-analysis techniques to arrive at a new understanding of the neural response to biological motion. Functionally defined body-, face-, and motion-selective visual areas all responded significantly to "point-light" human motion. Strikingly, however, only body selectivity was correlated, on a voxel-by-voxel basis, with biological motion selectivity. We conclude that (1) biological motion, through the process of structure-from-motion, engages areas involved in the analysis of the static human form; (2) body-selective regions in posterior fusiform gyrus and posterior inferior temporal sulcus overlap with, but are distinct from, face- and motion-selective regions; (3) the interpretation of region-of-interest findings may be substantially altered when multiple patterns of selectivity are considered.

  6. Spatial Alignment and Response Hand in Geometric and Motion Illusions

    PubMed Central

    Scocchia, Lisa; Paroli, Michela; Stucchi, Natale A.; Sedda, Anna

    2017-01-01

    Perception of visual illusions is susceptible to manipulation of their spatial properties. Further, illusions can sometimes affect visually guided actions, especially the movement planning phase. Remarkably, visual properties of objects related to actions, such as affordances, can prime more accurate perceptual judgements. In spite of the amount of knowledge available on affordances and on the influence of illusions on actions (or lack thereof), virtually nothing is known about the reverse: the influence of action-related parameters on the perception of visual illusions. Here, we tested the hypothesis that the response mode (which can be linked to action-relevant features) can affect perception of the Poggendorff (geometric) and of the Vanishing Point (motion) illusion. We explored the role of hand dominance (right dominant versus left non-dominant hand) and its interaction with stimulus spatial alignment (i.e., congruency between visual stimulus and the hand used for responses). Seventeen right-handed participants performed our tasks with their right and left hands, and the stimuli were presented in regular and mirror-reversed views. It turned out that the regular version of the Poggendorff display generates a stronger illusion compared to the mirror version, and that participants are less accurate and show more variability when they use their left hand in responding to the Vanishing Point. In summary, our results show that there is a marginal effect of hand precision in motion-related illusions, which is absent for geometrical illusions. In the latter, attentional anisometry seems to play a greater role in generating the illusory effect. Taken together, our findings suggest that changes in the response mode (here: manual action-related parameters) do not necessarily affect illusion perception. 
Therefore, although intuitively speaking there should be at least unidirectional effects of perception on action, and possible interactions between the two systems, this simple study still suggests their relative independence, except for the case when the less skilled (non-dominant) hand and arguably more deliberate responses are used. PMID:28769830

  7. A Model of Human Orientation and Self Motion Perception during Body Acceleration: The Orientation Modeling System

    DTIC Science & Technology

    2016-09-28

    The OMS and Perception Toolbox were used to perform a case study of an F18 mishap, building on previous research and modeling results. Model results imply that...

  8. When eyes drive hand: Influence of non-biological motion on visuo-motor coupling.

    PubMed

    Thoret, Etienne; Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard

    2016-01-26

    Many studies have stressed that not only human movement execution but also the perception of motion is constrained by specific kinematics. For instance, it has been shown that visuo-manual tracking of a spotlight is optimal when the spotlight motion complies with biological rules such as the so-called 1/3 power law, which establishes the co-variation between the velocity and the trajectory curvature of the movement. The visual or kinesthetic perception of a geometry induced by motion has also been shown to be constrained by such biological rules. In the present study, we investigated whether the geometry induced by the visuo-motor coupling of biological movements is also constrained by the 1/3 power law under visual open-loop control, i.e. without visual feedback of arm displacement. We showed that when someone was asked to synchronize a drawing movement with a visual spotlight following a circular shape, the geometry of the reproduced shape was distorted by visual kinematics that did not respect the 1/3 power law. In particular, elliptical shapes were reproduced when the circle was traced with kinematics corresponding to an ellipse. Moreover, the distortions observed here were larger than in perceptual tasks, stressing the role of motor attractors in such visuo-motor coupling. Finally, by investigating the direct influence of visual kinematics on motor reproduction, our results reconcile previous knowledge on sensorimotor coupling of biological motions with external stimuli and provide evidence for the amodal encoding of biological motion. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
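    The 1/3 power law invoked above relates tangential velocity to path curvature as v = K * k^(-1/3), so movement is faster where the path is flatter. A minimal sketch on an ellipse (K and the axis lengths are illustrative constants, not the study's stimuli):

```python
import math

# 1/3 power law: tangential velocity v = K * curvature**(-1/3).
# K and the ellipse axes below are illustrative.
def power_law_speed(curvature, K=1.0):
    return K * curvature ** (-1.0 / 3.0)

def ellipse_curvature(t, a=2.0, b=1.0):
    """Curvature of the ellipse x = a*cos(t), y = b*sin(t) at parameter t."""
    return (a * b) / (a**2 * math.sin(t)**2 + b**2 * math.cos(t)**2) ** 1.5

flat = power_law_speed(ellipse_curvature(math.pi / 2))  # end of minor axis
tight = power_law_speed(ellipse_curvature(0.0))         # end of major axis
print(flat > tight)  # prints True
```

A spotlight tracing a circle with this elliptical velocity profile carries the kinematic signature of an ellipse, which is the kind of mismatch the study used to distort the reproduced shape.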

  9. Naturalistic FMRI mapping reveals superior temporal sulcus as the hub for the distributed brain network for social perception.

    PubMed

    Lahnakoski, Juha M; Glerean, Enrico; Salmi, Juha; Jääskeläinen, Iiro P; Sams, Mikko; Hari, Riitta; Nummenmaa, Lauri

    2012-01-01

    Despite the abundant data on brain networks processing static social signals, such as pictures of faces, the neural systems supporting social perception in naturalistic conditions are still poorly understood. Here we delineated brain networks subserving social perception under naturalistic conditions in 19 healthy humans who watched, during 3-T functional magnetic resonance imaging (fMRI), a set of 137 short (approximately 16 s each, total 27 min) audiovisual movie clips depicting pre-selected social signals. Two independent raters estimated how well each clip represented eight social features (faces, human bodies, biological motion, goal-oriented actions, emotion, social interaction, pain, and speech) and six filler features (places, objects, rigid motion, people not in social interaction, non-goal-oriented action, and non-human sounds) lacking social content. These ratings were used as predictors in the fMRI analysis. The posterior superior temporal sulcus (STS) responded to all social features but not to any non-social features, and the anterior STS responded to all social features except bodies and biological motion. We also found four partially segregated, extended networks for processing of specific social signals: (1) a fronto-temporal network responding to multiple social categories, (2) a fronto-parietal network preferentially activated to bodies, motion, and pain, (3) a temporo-amygdalar network responding to faces, social interaction, and speech, and (4) a fronto-insular network responding to pain, emotions, social interactions, and speech. Our results highlight the role of the pSTS in processing multiple aspects of social information, as well as the feasibility and efficiency of fMRI mapping under conditions that resemble the complexity of real life.

  10. Interocular velocity difference contributes to stereomotion speed perception

    NASA Technical Reports Server (NTRS)

    Brooks, Kevin R.

    2002-01-01

    Two experiments are presented assessing the contributions of the rate of change of disparity (CD) and interocular velocity difference (IOVD) cues to stereomotion speed perception. Using a two-interval forced-choice paradigm, the perceived speed of directly approaching and receding stereomotion and of monocular lateral motion in random dot stereogram (RDS) targets was measured. Prior adaptation using dysjunctively moving random dot stimuli induced a velocity aftereffect (VAE). The degree of interocular correlation in the adapting images was manipulated to assess the effectiveness of each cue. While correlated adaptation involved a conventional RDS stimulus, containing both IOVD and CD cues, uncorrelated adaptation featured an independent dot array in each monocular half-image, and hence lacked a coherent disparity signal. Adaptation produced a larger VAE for stereomotion than for monocular lateral motion, implying effects at neural sites beyond that of binocular combination. For motion passing through the horopter, correlated and uncorrelated adaptation stimuli produced equivalent stereomotion VAEs. The possibility that these results were due to the adaptation of a CD mechanism through random matches in the uncorrelated stimulus was discounted in a control experiment. Here both simultaneous and sequential adaptation of left and right eyes produced similar stereomotion VAEs. Motion at uncrossed disparities was also affected by both correlated and uncorrelated adaptation stimuli, but showed a significantly greater VAE in response to the former. These results show that (1) there are two separate, specialised mechanisms for encoding stereomotion: one through IOVD, the other through CD; (2) the IOVD cue dominates the perception of stereomotion speed for stimuli passing through the horopter; and (3) at a disparity pedestal both the IOVD and the CD cues have a significant influence.
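    The two cues contrasted here can be written down directly. With monocular image positions xL(t) and xR(t), the CD cue differentiates the disparity signal, d(xL - xR)/dt, whereas the IOVD cue differences the monocular velocities, vL - vR. The two quantities are algebraically identical; the distinction is in which signal the visual system computes first. A minimal numerical sketch (hypothetical values):

```python
def cd_cue(xL0, xR0, xL1, xR1, dt):
    """Change of disparity: compute disparity first, then differentiate."""
    return ((xL1 - xR1) - (xL0 - xR0)) / dt

def iovd_cue(xL0, xR0, xL1, xR1, dt):
    """Interocular velocity difference: monocular velocities first."""
    vL = (xL1 - xL0) / dt
    vR = (xR1 - xR0) / dt
    return vL - vR

# An approaching object: the two eyes' images drift in opposite directions.
samples = (0.0, 0.0, 0.5, -0.5, 0.25)  # xL0, xR0, xL1, xR1, dt (arbitrary units)
print(cd_cue(*samples), iovd_cue(*samples))  # prints 4.0 4.0
```

Because the values coincide, the study's uncorrelated adaptation stimulus (which destroys the disparity signal while preserving monocular velocities) is what lets the two mechanisms be separated experimentally.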

  11. Why do parallel cortical systems exist for the perception of static form and moving form?

    PubMed

    Grossberg, S

    1991-02-01

    This article analyzes computational properties that clarify why the parallel cortical systems V1→V2, V1→MT, and V1→V2→MT exist for the perceptual processing of static visual forms and moving visual forms. The article describes a symmetry principle, called FM symmetry, that is predicted to govern the development of these parallel cortical systems by computing all possible ways of symmetrically gating sustained cells with transient cells and organizing these sustained-transient cells into opponent pairs of on-cells and off-cells whose output signals are insensitive to direction of contrast. This symmetric organization explains how the static form system (static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast and insensitive to direction of motion, whereas the motion form system (motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast but sensitive to direction of motion. FM symmetry clarifies why the geometries of static and motion form perception differ: for example, why the opposite orientation of vertical is horizontal (90 degrees), but the opposite direction of up is down (180 degrees). Opposite orientations and directions are embedded in gated dipole opponent processes that are capable of antagonistic rebound. Negative afterimages, such as the MacKay and waterfall illusions, are hereby explained, as are aftereffects of long-range apparent motion. These antagonistic rebounds help to control a dynamic balance between complementary perceptual states of resonance and reset. Resonance cooperatively links features into emergent boundary segmentations via positive feedback in a CC loop, and reset terminates a resonance when the image changes, thereby preventing massive smearing of percepts. 
These complementary preattentive states of resonance and reset are related to analogous states that govern attentive feature integration, learning, and memory search in adaptive resonance theory. The mechanism used in the V1→MT system to generate a wave of apparent motion between discrete flashes may also be used in other cortical systems to generate spatial shifts of attention. The theory suggests how the V1→V2→MT cortical stream helps to compute moving form in depth and how long-range apparent motion of illusory contours occurs. These results collectively argue against vision theories that espouse independent processing modules. Instead, specialized subsystems interact to overcome computational uncertainties and complementary deficiencies, to cooperatively bind features into context-sensitive resonances, and to realize symmetry principles that are predicted to govern the development of the visual cortex.

  12. Separate visual representations for perception and for visually guided behavior

    NASA Technical Reports Server (NTRS)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background, or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  13. Spectral fingerprints of large-scale cortical dynamics during ambiguous motion perception.

    PubMed

    Helfrich, Randolph F; Knepper, Hannah; Nolte, Guido; Sengelmann, Malte; König, Peter; Schneider, Till R; Engel, Andreas K

    2016-11-01

    Ambiguous stimuli have been widely used to study the neuronal correlates of consciousness. Recently, it has been suggested that conscious perception might arise from the dynamic interplay of functionally specialized but widely distributed cortical areas. While previous research mainly focused on phase coupling as a correlate of cortical communication, more recent findings indicated that additional coupling modes might coexist and possibly subserve distinct cortical functions. Here, we studied two coupling modes, namely phase and envelope coupling, which might differ in their origins, putative functions and dynamics. Therefore, we recorded 128-channel EEG while participants performed a bistable motion task and utilized state-of-the-art source-space connectivity analysis techniques to study the functional relevance of different coupling modes for cortical communication. Our results indicate that gamma-band phase coupling in extrastriate visual cortex might mediate the integration of visual tokens into a moving stimulus during ambiguous visual stimulation. Furthermore, our results suggest that long-range fronto-occipital gamma-band envelope coupling sustains the horizontal percept during ambiguous motion perception. Additionally, our results support the idea that local parieto-occipital alpha-band phase coupling controls the inter-hemispheric information transfer. These findings provide correlative evidence for the notion that synchronized oscillatory brain activity reflects the processing of sensory input as well as the information integration across several spatiotemporal scales. The results indicate that distinct coupling modes are involved in different cortical computations and that the rich spatiotemporal correlation structure of the brain might constitute the functional architecture for cortical processing and specific multi-site communication. Hum Brain Mapp 37:4099-4111, 2016. © 2016 Wiley Periodicals, Inc.

  14. Perception of linear acceleration in weightlessness

    NASA Technical Reports Server (NTRS)

    Arrott, Anthony P.; Young, Laurence R.; Merfeld, Daniel M.

    1991-01-01

    Tests of the perception and use of linear acceleration sensory information were performed on the science crews of the Spacelab 1 (SL-1) and D-1 missions using linear 'sleds' in-flight (D-1) and pre-post flight. The time delay between the acceleration step stimulus and the subjective response was consistently reduced during weightlessness, but was neither statistically significant nor of functional importance. Increased variability of responses when going from one environment to the other was apparent from measurements on the first day of the mission and in the first days post-flight. Subjective reports of perceived motion during sinusoidal oscillation in weightlessness were qualitatively similar to reports on earth. In a closed-loop motion nulling task, enhanced performance was observed post-flight in all crewmembers tested in the Y or Z axes.

  16. Effect Of Contrast On Perceived Motion Of A Plaid

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Watson, A. B.; Mulligan, J. B.

    1992-01-01

    Report describes series of experiments examining effect of contrast on perception of moving plaids. Each plaid pattern used in experiments was sum of two drifting sinusoidal gratings of different orientations. One of many studies helping to show how brain processes visual information on moving patterns. When gratings forming plaid differ in contrast, apparent direction of motion of plaid is biased up to 20 degrees toward direction of grating of higher contrast.
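
    The contrast bias reported here can be illustrated with a toy contrast-weighted vector-average model. This is an illustrative sketch, not the analysis in the report; the function name and all numerical values are hypothetical:

```python
import math

def vector_average_direction(dir1_deg, c1, dir2_deg, c2):
    """Contrast-weighted vector average of two component-grating
    directions (a toy model; function and parameters are hypothetical).
    Returns the combined direction in degrees."""
    x = c1 * math.cos(math.radians(dir1_deg)) + c2 * math.cos(math.radians(dir2_deg))
    y = c1 * math.sin(math.radians(dir1_deg)) + c2 * math.sin(math.radians(dir2_deg))
    return math.degrees(math.atan2(y, x))

# Gratings drifting 30 degrees either side of vertical (90 degrees):
equal_contrast = vector_average_direction(60, 0.5, 120, 0.5)  # unbiased: 90 degrees
high_low       = vector_average_direction(60, 0.8, 120, 0.2)  # pulled toward 60 degrees
```

    With an 0.8/0.2 contrast split the computed direction shifts roughly 19 degrees toward the higher-contrast component, comparable in size to the bias reported above.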

  17. Defining the computational structure of the motion detector in Drosophila

    PubMed Central

    Clark, Damon A.; Bursztyn, Limor; Horowitz, Mark; Schnitzer, Mark J.; Clandinin, Thomas R.

    2011-01-01

    SUMMARY Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt Correlator (HRC), relates visual inputs to neural and behavioral responses to motion, but the circuits that implement this computation remain unknown. Using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, “reverse phi”, that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection. PMID:21689602
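
    A minimal HRC can be sketched in a few lines: each arm multiplies the signal at one point by a temporally low-pass-filtered ("delayed") copy of its spatial neighbour, and the mirror-symmetric arm is subtracted so that the sign of the output carries direction. This is a sketch under simple first-order assumptions, not the circuit model examined in the paper; the time constants are illustrative:

```python
import numpy as np

def hrc_response(stimulus, dt=1.0, tau=2.0):
    """Opponent Hassenstein-Reichardt correlator output, averaged over
    the stimulus (a minimal sketch; dt and tau are illustrative).

    stimulus: 2-D array indexed (time, space).  Each arm multiplies the
    signal at one point by a first-order low-pass ("delayed") copy of
    its neighbour; subtracting the mirror-symmetric arm makes the mean
    output signed by direction (positive = rightward here)."""
    alpha = dt / (tau + dt)                    # low-pass update coefficient
    delayed = np.zeros_like(stimulus)
    for t in range(1, stimulus.shape[0]):      # temporal low-pass = delay arm
        delayed[t] = delayed[t - 1] + alpha * (stimulus[t] - delayed[t - 1])
    out = delayed[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * delayed[:, 1:]
    return out.mean()

# Drifting gratings: rightward motion should give a positive mean response.
t = np.arange(64)[:, None]
x = np.arange(32)[None, :]
rightward = np.sin(2 * np.pi * (x - t) / 8.0)
leftward  = np.sin(2 * np.pi * (x + t) / 8.0)
```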

  18. Anisotropic responses to motion toward and away from the eye

    NASA Technical Reports Server (NTRS)

    Perrone, John A.

    1986-01-01

    When a rigid object moves toward the eye, it is usually perceived as being rigid. However, in the case of motion away from the eye, the motion and structure of the object are perceived nonveridically, with the percept tending to reflect the nonrigid transformations that are present in the retinal image. This difference in response to motion to and from the observer was quantified in an experiment using wire-frame computer-generated boxes which moved toward and away from the eye. Two theoretical systems are developed by which uniform three-dimensional velocity can be recovered from an expansion pattern of nonuniform velocity vectors. It is proposed that the human visual system uses two similar systems for processing motion in depth. The mechanism used for motion away from the eye produces perceptual errors because it is not suited to objects with a depth component.

  19. Sporadic frame dropping impact on quality perception

    NASA Astrophysics Data System (ADS)

    Pastrana-Vidal, Ricardo R.; Gicquel, Jean Charles; Colomes, Catherine; Cherifi, Hocine

    2004-06-01

    Over the past few years there has been an increasing interest in real time video services over packet networks. When considering quality, it is essential to quantify user perception of the received sequence. Severe motion discontinuities are one of the most common degradations in video streaming. The end-user perceives a jerky motion when the discontinuities are uniformly distributed over time, and an instantaneous fluidity break is perceived when the motion loss is isolated or irregularly distributed. Bit rate adaptation techniques, transmission errors in packet networks, or the restitution strategy could be the origin of this perceived jerkiness. In this paper we present a psychovisual experiment performed to quantify the effect of sporadically dropped pictures on the overall perceived quality. First, the perceptual detection thresholds of generated temporal discontinuities were measured. Then, the quality function was estimated in relation to a single frame dropping for different durations. Finally, a set of tests was performed to quantify the effect of several impairments distributed over time. We have found that the detection thresholds are content, duration and motion dependent. The assessment results show how quality is impaired by a single burst of dropped frames in a 10 sec sequence. The effect of several bursts of discarded frames, irregularly distributed over time, is also discussed.

  20. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  1. Stereo-motion cooperation and the use of motion disparity in the visual perception of 3-D structure.

    PubMed

    Cornilleau-Pérès, V; Droulez, J

    1993-08-01

    When an observer views a moving scene binocularly, both motion parallax and binocular disparity provide depth information. In Experiments 1A-1C, we measured sensitivity to surface curvature when these depth cues were available either individually or simultaneously. When the depth cues yielded comparable sensitivity to surface curvature, we found that curvature detection was easier with the cues present simultaneously, rather than individually. For 2 of the 6 subjects, this effect was stronger when the component of frontal translation of the surface was vertical, rather than horizontal. No such anisotropy was found for the 4 other subjects. If a moving object is observed binocularly, the patterns of optic flow are different on the left and right retinae. We have suggested elsewhere (Cornilleau-Pérès & Droulez, in press) that this motion disparity might be used as a visual cue for the perception of a 3-D structure. Our model consisted in deriving binocular disparity from the left and right distributions of vertical velocities, rather than from luminous intensities, as has been done in classical studies on stereoscopic vision. The model led to some predictions concerning the detection of surface curvature from motion disparity in the presence or absence of intensity-based disparity (classically termed binocular disparity). In a second set of experiments, we attempted to test these predictions, and we failed to validate our theoretical scheme from a physiological point of view.

  2. Aging and the Visual Perception of Motion Direction: Solving the Aperture Problem.

    PubMed

    Shain, Lindsey M; Norman, J Farley

    2018-07-01

    An experiment required younger and older adults to estimate coherent visual motion direction from multiple motion signals, where each motion signal was locally ambiguous with respect to the true direction of pattern motion. Thus, accurate performance required the successful integration of motion signals across space (i.e., accurate performance required solution of the aperture problem). The observers viewed arrays of either 64 or 9 moving line segments; because these lines moved behind apertures, their individual local motions were ambiguous with respect to direction (i.e., were subject to the aperture problem). Following 2.4 seconds of pattern motion on each trial (true motion directions ranged over the entire range of 360° in the fronto-parallel plane), the observers estimated the coherent direction of motion. There was an effect of direction, such that cardinal directions of pattern motion were judged with less error than oblique directions. In addition, a large effect of aging occurred: the average absolute errors of the older observers were 46% and 30.4% higher in magnitude than those exhibited by the younger observers for the 64 and 9 aperture conditions, respectively. Finally, the observers' precision markedly deteriorated as the number of apertures was reduced from 64 to 9.
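
    The integration task the observers face can be stated as a small linear problem: each line's local motion constrains only the velocity component along its normal, n · v = s, and differently oriented constraints intersect at the true pattern velocity. A least-squares sketch of that geometry (illustrative, not the authors' analysis):

```python
import numpy as np

def recover_pattern_motion(normals, speeds):
    """Recover pattern velocity v from aperture-limited measurements.
    Each line reports only its normal speed: normals[i] . v = speeds[i].
    Least squares over differently oriented lines solves the aperture
    problem (an illustrative sketch, not the authors' analysis)."""
    N = np.asarray(normals, dtype=float)   # (k, 2) unit normal vectors
    s = np.asarray(speeds, dtype=float)    # (k,) measured normal speeds
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# Simulated measurements from lines at four orientations:
true_v  = np.array([3.0, 1.0])
angles  = np.deg2rad([0, 45, 90, 135])
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
speeds  = normals @ true_v                 # each aperture sees only this
```

    A single constraint leaves a whole line of candidate velocities; only combining apertures at different orientations pins down the true direction, which is why performance degrades with fewer apertures.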

  3. Stochastic correlative firing for figure-ground segregation.

    PubMed

    Chen, Zhe

    2005-03-01

    Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.

  4. Is Vestibular Self-Motion Perception Controlled by the Velocity Storage? Insights from Patients with Chronic Degeneration of the Vestibulo-Cerebellum

    PubMed Central

    Bertolini, Giovanni; Ramat, Stefano; Bockisch, Christopher J.; Marti, Sarah; Straumann, Dominik; Palla, Antonella

    2012-01-01

    Background The rotational vestibulo-ocular reflex (rVOR) generates compensatory eye movements in response to rotational head accelerations. The velocity-storage mechanism (VSM), which is controlled by the vestibulo-cerebellar nodulus and uvula, determines the rVOR time constant. In healthy subjects, it has been suggested that self-motion perception in response to earth-vertical axis rotations depends on the VSM in a similar way as reflexive eye movements. We aimed at further investigating this hypothesis and speculated that if the rVOR and rotational self-motion perception share a common VSM, alteration in the latter, such as those occurring after a loss of the regulatory control by vestibulo-cerebellar structures, would result in similar reflexive and perceptual response changes. We therefore set out to explore both responses in patients with vestibulo-cerebellar degeneration. Methodology/Principal Findings Reflexive eye movements and perceived rotational velocity were simultaneously recorded in 14 patients with chronic vestibulo-cerebellar degeneration (28–81yrs) and 12 age-matched healthy subjects (30–72yrs) after the sudden deceleration (90°/s2) from constant-velocity (90°/s) rotations about the earth-vertical yaw and pitch axes. rVOR and perceived rotational velocity data were analyzed using a two-exponential model with a direct pathway, representing semicircular canal activity, and an indirect pathway, implementing the VSM. We found that VSM time constants of rVOR and perceived rotational velocity co-varied in cerebellar patients and in healthy controls (Pearson correlation coefficient for yaw 0.95; for pitch 0.93, p<0.01). When constraining model parameters to use the same VSM time constant for rVOR and perceived rotational velocity, moreover, no significant deterioration of the quality of fit was found for both populations (variance-accounted-for >0.8). 
Conclusions/Significance Our results confirm that self-motion perception in response to rotational velocity-steps may be controlled by the same velocity storage network that controls reflexive eye movements and that no additional, e.g. cortical, mechanisms are required to explain perceptual dynamics. PMID:22719833
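
    The two-pathway structure of such models is easy to simulate: a direct canal signal that decays with its own short time constant, plus a velocity-storage stage charged by the canal signal and leaking more slowly, which lengthens the overall response. A sketch with illustrative parameters (not the fitted values from the paper):

```python
def simulate_rvor(duration=60.0, dt=0.01, T_canal=6.0, T_vs=15.0, g=1.0, v0=90.0):
    """Euler simulation of slow-phase velocity after sudden deceleration
    from a 90 deg/s rotation: a direct canal pathway plus an indirect
    velocity-storage pathway (parameter values are illustrative, not the
    paper's fitted constants)."""
    canal, store, response = v0, 0.0, []
    for _ in range(int(duration / dt)):
        response.append(canal + g * store)      # summed two-pathway output
        store += dt * (canal - store) / T_vs    # VSM charged by canal, leaks slowly
        canal += dt * (-canal / T_canal)        # canal signal decays quickly
    return response

r = simulate_rvor()
# The combined response outlasts the canal signal alone, which is the
# time-constant lengthening attributed to the velocity storage mechanism.
```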

  5. Anticipatory smooth eye movements with random-dot kinematograms

    PubMed Central

    Santos, Elio M.; Gnang, Edinah K.; Kowler, Eileen

    2012-01-01

    Anticipatory smooth eye movements were studied in response to expectations of motion of random-dot kinematograms (RDKs). Dot lifetime was limited (52–208 ms) to prevent selection and tracking of the motion of local elements and to disrupt the perception of an object moving across space. Anticipatory smooth eye movements were found in response to cues signaling the future direction of global RDK motion, either prior to the onset of the RDK or prior to a change in its direction of motion. Cues signaling the lifetime of the dots were not effective. These results show that anticipatory smooth eye movements can be produced by expectations of global motion and do not require a sustained representation of an object or set of objects moving across space. At the same time, certain properties of global motion (direction) were more sensitive to cues than others (dot lifetime), suggesting that the rules by which prediction operates to influence pursuit may go beyond simple associations between cues and the upcoming motion of targets. PMID:23027686

  6. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
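
    The geometric core of heading recovery from optic flow can be shown without any MSTd machinery: under pure observer translation every flow vector lies on a line through the focus of expansion, so the FOE (the heading point) is the least-squares intersection of those lines. This is a deliberately simplified sketch, not the neural template-free model of the paper:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion of a radial flow field.
    Under pure translation each flow vector lies on the line through
    its point and the FOE; with normal n = (-d_y, d_x) per vector,
    n . (foe - p) = 0 stacks into a small linear system.
    (A geometric sketch, far simpler than the paper's MSTd model.)"""
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
    b = np.einsum('ij,ij->i', n, points)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

true_foe = np.array([2.0, -1.0])       # simulated heading point
pts   = np.array([[0.0, 0.0], [5.0, 3.0], [-4.0, 2.0], [1.0, 6.0]])
flows = (pts - true_foe) * 0.2         # expansion flow away from the FOE
```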

  7. Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry

    PubMed Central

    Jaworska, Katarzyna; Lages, Martin

    2014-01-01

    Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063

  8. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2014-01-01

    Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  9. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.

    PubMed

    Wright, W Geoffrey

    2014-01-01

    Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  10. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    PubMed

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information for understanding the dynamics of a scene and allows intelligent systems to interact with their environment. Motion computation is usually restricted by real-time requirements that demand the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture consists of three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated and validated in VHDL. The synthesis results on a Field Programmable Gate Array (FPGA) device show the potential achievement of real-time performance at an affordable silicon area.
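
    The spatio-temporal Gabor-like filtering stage mentioned above can be sketched in software before committing it to hardware: a quadrature pair of Gabor kernels oriented in space-time responds selectively to motion in its preferred direction. This is a motion-energy-style sketch; the parameter values are illustrative and not those of the FPGA design:

```python
import numpy as np

def motion_energy(stimulus, f_x=0.125, f_t=0.125, sigma=4.0):
    """Spatio-temporal Gabor 'motion energy' from a quadrature pair
    (a minimal sketch of Gabor-like filtering; parameters illustrative).
    stimulus: 2-D array indexed (time, space)."""
    T, X = stimulus.shape
    t = np.arange(T)[:, None] - T / 2
    x = np.arange(X)[None, :] - X / 2
    env = np.exp(-(x**2 + t**2) / (2 * sigma**2))   # space-time Gaussian window
    phase = 2 * np.pi * (f_x * x - f_t * t)         # orientation tuned to rightward drift
    even, odd = env * np.cos(phase), env * np.sin(phase)
    return np.sum(stimulus * even) ** 2 + np.sum(stimulus * odd) ** 2

# The filter responds far more to its preferred direction of drift:
tt = np.arange(32)[:, None]
xx = np.arange(32)[None, :]
rightward = np.sin(2 * np.pi * 0.125 * (xx - tt))
leftward  = np.sin(2 * np.pi * 0.125 * (xx + tt))
```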

  11. Perceived self-orientation and self-motion in microgravity, after landing and during preflight adaptation training

    NASA Technical Reports Server (NTRS)

    Harm, D. L.; Parker, D. E.

    1993-01-01

    The research described in this paper is intended to support development and evaluation of preflight adaptation training (PAT) apparatus and procedures. Successful training depends on appropriate manipulation of visual and inertial stimuli that control perception of self-motion and self-orientation. For one part of this process, astronauts are trained to report their self-motion and self-orientation experiences. Before their space mission, they are exposed to the altered sensory environments produced by the PAT trainers. During and after the mission, they report their motion and orientation experiences. Subsequently, they are again exposed to the PAT trainers and are asked to describe relationships between their experiences in microgravity and following entry and their experiences in the trainers.

  12. Modeling Visual, Vestibular and Oculomotor Interactions in Self-Motion Estimation

    NASA Technical Reports Server (NTRS)

    Perrone, John

    1997-01-01

    A computational model of human self-motion perception has been developed in collaboration with Dr. Leland S. Stone at NASA Ames Research Center. The research included in the grant proposal sought to extend the utility of this model so that it could be used for explaining and predicting human performance in a greater variety of aerospace applications. This extension has been achieved along with physiological validation of the basic operation of the model.

  13. A Theoretical and Experimental Analysis of the Outside World Perception Process

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1978-01-01

    The outside scene is often an important source of information for manual control tasks. Important examples are car driving and aircraft control. This paper deals with modelling this visual scene perception process on the basis of linear perspective geometry and relative motion cues. Model predictions, based on psychophysical threshold data from baseline experiments and the literature, are compared with experimental data for a variety of visual approach tasks. Both the performance and workload results illustrate that the model provides a meaningful description of the outside world perception process, with a useful predictive capability.

  14. Psychophysical scaling of circular vection (CV) produced by optokinetic (OKN) motion: individual differences and effects of practice.

    PubMed

    Kennedy, R S; Hettinger, L J; Harm, D L; Ordy, J M; Dunlap, W P

    1996-01-01

    Vection (V) refers to the compelling visual illusion of self-motion experienced by stationary individuals when viewing moving visual surrounds. The phenomenon is of theoretical interest because of its relevance for understanding the neural basis of ordinary self-motion perception, and of practical importance because it is the experience that makes simulation, virtual reality displays, and entertainment devices more vicarious. This experiment was performed to address whether an optokinetically induced vection illusion exhibits monotonic and stable psychometric properties and whether individuals differ reliably in these (V) perceptions. Subjects were exposed to varying velocities of the circular vection (CV) display in an optokinetic (OKN) drum 2 meters in diameter in 5 one-hour daily sessions extending over a 1-week period. For grouped data, psychophysical scalings of velocity estimates showed that exponents in a Stevens-type power function were essentially linear (slope = 0.95) and largely stable over sessions. Latencies were slightly longer for the slowest and fastest induction stimuli, and average latency grew longer with practice, implying time-course adaptation effects. Test-retest reliabilities for individual slope and intercept measures were moderately strong (r = 0.45) and showed no evidence of superdiagonal form. This implies stability of the individual circular vection (CV) sensitivities. Because the individual CV scores were stable, reliabilities were improved by averaging 4 sessions in order to provide a stronger retest reliability (r = 0.80). Individual latency responses were highly reliable (r = 0.80). Mean CV latency and motion sickness symptoms were greater in males than in females. These individual differences in CV could be predictive of other outcomes, such as susceptibility to disorientation or motion sickness, and for CNS localization of visual-vestibular interactions in the experience of self-motion.
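
    The reported slope of 0.95 comes from fitting a Stevens power function R = k·S^n, whose exponent is simply the slope of a straight line fitted in log-log coordinates. A minimal sketch of that standard procedure with synthetic data (hypothetical numbers, not the study's measurements):

```python
import math

def stevens_exponent(stimulus, response):
    """Exponent n of a Stevens power function R = k * S**n, obtained as
    the slope of an ordinary least-squares line in log-log coordinates
    (a sketch of the standard procedure, not the study's exact code)."""
    lx = [math.log(s) for s in stimulus]
    ly = [math.log(r) for r in response]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic velocity estimates generated with exponent 0.95 (hypothetical
# numbers, chosen only to mirror the reported near-linear scaling):
speeds    = [10, 20, 40, 80]
estimates = [2.0 * s ** 0.95 for s in speeds]
```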

  15. Imagined Self-Motion Differs from Perceived Self-Motion: Evidence from a Novel Continuous Pointing Method

    PubMed Central

    Campos, Jennifer L.; Siegle, Joshua H.; Mohler, Betty J.; Bülthoff, Heinrich H.; Loomis, Jack M.

    2009-01-01

    Background The extent to which actual movements and imagined movements maintain a shared internal representation has been a matter of much scientific debate. Of the studies examining such questions, few have directly compared actual full-body movements to imagined movements through space. Here we used a novel continuous pointing method to a) provide a more detailed characterization of self-motion perception during actual walking and b) compare the pattern of responding during actual walking to that which occurs during imagined walking. Methodology/Principal Findings This continuous pointing method requires participants to view a target and continuously point towards it as they walk, or imagine walking, past it along a straight, forward trajectory. By measuring changes in the pointing direction of the arm, we were able to determine participants' perceived/imagined location at each moment during the trajectory and, hence, perceived/imagined self-velocity during the entire movement. The specific pattern of pointing behaviour that was revealed during sighted walking was also observed during blind walking. Specifically, a peak in arm azimuth velocity was observed upon target passage and a strong correlation was observed between arm azimuth velocity and pointing elevation. Importantly, this characteristic pattern of pointing was not consistently observed during imagined self-motion. Conclusions/Significance Overall, the spatial updating processes that occur during actual self-motion were not evidenced during imagined movement. Because of the rich description of self-motion perception afforded by continuous pointing, this method is expected to have significant implications for several research areas, including those related to motor imagery and spatial cognition and to applied fields for which mental practice techniques are common (e.g. rehabilitation and athletics). PMID:19907655

  16. Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.

    PubMed

    Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp

    2012-07-30

    Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia.

  17. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  18. Physiologic adaptation of man in space; Proceedings of the Seventh International Man in Space Symposium, Houston, TX, Feb. 10-13, 1986

    NASA Technical Reports Server (NTRS)

    Holland, Albert W. (Editor)

    1987-01-01

    Topics discussed in this volume include space motion sickness, cardiovascular adaptation, fluid shifts, extravehicular activity, general physiology, perception, vestibular response modifications, vestibular physiology, and pharmacology. Papers are presented on the clinical characterization and etiology of space motion sickness, ultrasound techniques in space medicine, fluid shifts in weightlessness, Space Shuttle inflight and postflight fluid shifts measured by leg volume changes, and the probability of oxygen toxicity in an 8-psi space suit. Consideration is also given to the metabolic and hormonal status of crewmembers in short-term space flights, adaptive changes in perception of body orientation and mental image rotation in microgravity, the effects of a visual-vestibular stimulus on the vestibulo-ocular reflex, rotation tests in the weightless phase of parabolic flight, and the mechanisms of antimotion sickness drugs.

  19. Motion processing with two eyes in three dimensions.

    PubMed

    Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2011-02-11

    The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids, and a novel "dichoptic pseudoplaid" stimulus provides strong support for use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.

  20. Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus.

    PubMed

    Furl, Nicholas; Henson, Richard N; Friston, Karl J; Calder, Andrew J

    2015-09-01

    The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion.

  1. Evaluation of simulation motion fidelity criteria in the vertical and directional axes

    NASA Technical Reports Server (NTRS)

    Schroeder, Jeffery A.

    1993-01-01

    An evaluation of existing motion fidelity criteria was conducted on the NASA Ames Vertical Motion Simulator. Experienced test pilots flew single-axis repositioning tasks in both the vertical and the directional axes. The vehicle model was a first-order approximation of a hovering helicopter, and tasks were flown with variations only in the filters that attenuate the commands to the simulator motion system. These filters had second-order high-pass characteristics, and the variations were made in the filter gain and natural frequency. The variations spanned motion response characteristics from nearly full math-model motion to fixed-base. Between configurations, pilots recalibrated their motion response perception by flying the task with full motion. Pilots subjectively rated the motion fidelity of subsequent configurations relative to this full motion case, which was considered the standard for comparison. The results suggested that the existing vertical-axis criterion was accurate for combinations of gain and natural frequency changes. However, if only the gain or the natural frequency was changed, the rated motion fidelity was better than the criterion predicted. In the vertical axis, the objective and subjective results indicated that a larger gain reduction was tolerated than the existing criterion allowed. The limited data collected in the yaw axis revealed that pilots had difficulty in distinguishing among the variations in the pure yaw motion cues.
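A minimal sketch of the kind of second-order high-pass washout filter described above, with hypothetical gain, damping, and natural-frequency values: a sustained acceleration command is passed through initially and then washed out so the simulator cab can return toward neutral.

```python
def washout_filter(u, dt, gain=1.0, wn=1.0, zeta=0.7):
    """Second-order high-pass motion washout:
    y/u = gain * s^2 / (s^2 + 2*zeta*wn*s + wn^2),
    integrated with semi-implicit Euler."""
    x1 = x2 = 0.0
    out = []
    for uk in u:
        y = gain * (uk - 2 * zeta * wn * x2 - wn ** 2 * x1)
        out.append(y)
        x1 += dt * x2
        # x2 update uses the freshly advanced x1 (semi-implicit Euler)
        x2 += dt * (uk - 2 * zeta * wn * x2 - wn ** 2 * x1)
    return out

dt = 0.01
u = [1.0] * 1000  # sustained 1 m/s^2 acceleration command for 10 s
y = washout_filter(u, dt)
print(round(y[0], 2), round(y[-1], 3))  # full cue at onset, washed out by the end
```

Raising wn washes cues out faster, while lowering gain attenuates them uniformly; those two knobs are exactly the parameter space the study varied.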

  2. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
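The scalar case gives a feel for the Riccati computation the nonlinear algorithm performs in real time. This is a hedged sketch, not the study's method: the study solves a matrix Riccati equation with a neurocomputing approach, whereas here a hypothetical one-state LQR problem is solved both in closed form and by simple forward integration of the Riccati differential equation until it settles.

```python
import math

# Scalar LQR: minimize the integral of (q*x^2 + r*u^2) dt for x' = a*x + b*u.
a, b, q, r = -0.5, 1.0, 1.0, 0.1  # hypothetical plant and weights

# Closed-form positive root of the scalar algebraic Riccati equation:
#   2*a*p - (b^2 / r) * p^2 + q = 0
p_alg = r * (a + math.sqrt(a ** 2 + b ** 2 * q / r)) / b ** 2

# Real-time alternative: integrate dp/dt = 2*a*p - (b^2/r)*p^2 + q
# until it converges (a crude stand-in for an online solver).
p, dt = 0.0, 0.001
for _ in range(20000):
    p += dt * (2 * a * p - (b ** 2 / r) * p ** 2 + q)

print(round(p, 4), round(p_alg, 4))  # the iterated and closed-form solutions agree
```

The optimal feedback gain is then k = b * p / r, and the time-varying case simply re-runs the update as the plant or weights change.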

  3. A Bayesian model of stereopsis depth and motion direction discrimination.

    PubMed

    Read, J C A

    2002-02-01

    The extraction of stereoscopic depth from retinal disparity, and motion direction from two-frame kinematograms, requires the solution of a correspondence problem. In previous psychophysical work [Read and Eagle (2000) Vision Res 40: 3345-3358], we compared the performance of the human stereopsis and motion systems with correlated and anti-correlated stimuli. We found that, although the two systems performed similarly for narrow-band stimuli, broadband anti-correlated kinematograms produced a strong perception of reversed motion, whereas the stereograms appeared merely rivalrous. I now model these psychophysical data with a computational model of the correspondence problem based on the known properties of visual cortical cells. Noisy retinal images are filtered through a set of Fourier channels tuned to different spatial frequencies and orientations. Within each channel, a Bayesian analysis incorporating a prior preference for small disparities is used to assess the probability of each possible match. Finally, information from the different channels is combined to arrive at a judgement of stimulus disparity. Each model system (stereopsis and motion) has two free parameters: the amount of noise it is subject to and the strength of its preference for small disparities. By adjusting these parameters independently for each system, qualitative matches are produced to psychophysical data, for both correlated and anti-correlated stimuli, across a range of spatial frequency and orientation bandwidths. The motion model is found to require much higher noise levels and a weaker preference for small disparities. This makes the motion model more tolerant of the poor-quality reverse-direction false matches encountered with anti-correlated stimuli, matching the strong perception of reversed motion that humans experience with these stimuli. In contrast, the lower noise level and tighter prior preference used with the stereopsis model mean that it performs close to chance with anti-correlated stimuli, in accordance with human psychophysics. Thus, the key features of the experimental data can be reproduced by assuming that the motion system experiences more effective noise than the stereopsis system and imposes a less stringent preference for small disparities.
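A toy version of the per-channel Bayesian step illustrates why a tighter prior suppresses a large-disparity false match. All numbers here are hypothetical, and for brevity only the prior width is varied (the paper's model additionally assigns the motion system more noise):

```python
import math

def posterior(match_scores, disparities, sigma_prior, noise):
    """Combine match evidence with a Gaussian prior favoring small
    disparities: P(d) proportional to exp(score/noise - d^2/(2*sigma^2))."""
    logp = [s / noise - d ** 2 / (2 * sigma_prior ** 2)
            for s, d in zip(match_scores, disparities)]
    m = max(logp)                       # subtract max for numerical stability
    w = [math.exp(lp - m) for lp in logp]
    z = sum(w)
    return [x / z for x in w]

disparities = [-4, -2, 0, 2, 4]
# Hypothetical anti-correlated stimulus: the best raw match sits at a
# large "false" disparity (score 1.0 at d=4), with weak evidence near zero.
scores = [0.2, 0.3, 0.4, 0.3, 1.0]

stereo = posterior(scores, disparities, sigma_prior=1.0, noise=0.5)  # tight prior
motion = posterior(scores, disparities, sigma_prior=5.0, noise=0.5)  # loose prior
print(disparities[stereo.index(max(stereo))])  # tight prior overrides: picks 0
print(disparities[motion.index(max(motion))])  # loose prior accepts false match: 4
```

The loose-prior system accepting the distant false match is the toy analogue of the motion system's reversed-motion percept with anti-correlated stimuli.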

  4. Transitions between Central and Peripheral Vision Create Spatial/Temporal Distortions: A Hypothesis Concerning the Perceived Break of the Curveball

    PubMed Central

    Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert

    2010-01-01

    Background The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. Methodology/Principal Findings The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. Conclusions/Significance The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. 
However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing. PMID:20967247
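The physical (as opposed to perceptual) smoothness of the curveball's path can be checked with a one-line kinematic estimate: a constant lateral Magnus acceleration over the flight produces a smooth parabolic deviation, with no discontinuous jump. All numbers below are hypothetical but roughly realistic.

```python
# Parabolic break from a constant lateral (Magnus) acceleration.
a = 9.0          # m/s^2 lateral acceleration from spin (hypothetical)
v = 35.0         # m/s pitch speed (hypothetical)
distance = 18.4  # m, pitching rubber to home plate
t = distance / v              # flight time, s
break_m = 0.5 * a * t ** 2    # total lateral deviation
print(round(break_m, 2))      # about 1.24 m with these numbers
```

Because the deviation grows quadratically and continuously with time, any perceived sudden "break" must arise in the observer, which is the hypothesis the paper develops.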

  5. Type of featural attention differentially modulates hMT+ responses to illusory motion aftereffects.

    PubMed

    Castelo-Branco, Miguel; Kozak, Lajos R; Formisano, Elia; Teixeira, João; Xavier, João; Goebel, Rainer

    2009-11-01

    Activity in the human motion complex (hMT(+)/V5) is related to the perception of motion, whether real surface motion or an illusion of motion such as apparent motion (AM) or motion aftereffect (MAE). It has long been debated whether illusory motion-related activations in hMT(+) represent the motion itself or attention to it. We have asked whether hMT(+) responses to MAEs are present when shifts in arousal are suppressed and attention is focused on concurrent motion versus nonmotion features. Significant enhancement of hMT(+) activity was observed during MAEs when attention was focused either on concurrent spatial angle or color features. This observation was confirmed by direct comparison of adapting (MAE inducing) versus nonadapting conditions. In contrast, this effect was diminished when subjects had to report on concomitant speed changes of superimposed AM. The same finding was observed for concomitant orthogonal real motion (RM), suggesting that selective attention to concurrent illusory or real motion was interfering with the saliency of MAE signals in hMT(+). We conclude that MAE-related changes in the global activity of hMT(+) are present provided selective attention is not focused on an interfering feature such as concurrent motion. Accordingly, there is a genuine MAE-related motion signal in hMT(+) that is neither explained by shifts in arousal nor by selective attention.

  6. Critical role of foreground stimuli in perceiving visually induced self-motion (vection).

    PubMed

    Nakamura, S; Shimojo, S

    1999-01-01

    The effects of a foreground stimulus on vection (illusory perception of self-motion induced by a moving background stimulus) were examined in two experiments. The experiments reveal that the presentation of a foreground pattern with a moving background stimulus may affect vection. The foreground stimulus facilitated vection strength when it remained stationary or moved slowly in the opposite direction to that of the background stimulus. On the other hand, there was a strong inhibition of vection when the foreground stimulus moved slowly with, or quickly against, the background. These results suggest that foreground stimuli, as well as background stimuli, play an important role in perceiving self-motion.

  7. Autogenic-feedback training - A treatment for motion and space sickness

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.

    1990-01-01

    A training method for preventing the occurrence of motion sickness in humans, called autogenic-feedback training (AFT), is described. AFT is based on a combination of biofeedback and autogenic therapy which involves training physiological self-regulation as an alternative to pharmacological management. AFT was used to reliably increase tolerance to motion-sickness-inducing tests in both men and women ranging in age from 18 to 54 years. The effectiveness of AFT is found to be significantly higher than that of protective adaptation training. Data obtained show that there is no apparent effect from AFT on measures of vestibular perception and no side effects.

  8. Perception of linear acceleration in weightlessness

    NASA Technical Reports Server (NTRS)

    Arrott, A. P.; Young, L. R.

    1987-01-01

    Eye movements and subjective detection of acceleration were measured on human experimental subjects during vestibular sled acceleration during the D1 Spacelab Mission. Methods and results are reported on the time to detection of small acceleration steps, the threshold for detection of linear acceleration, perceived motion path, and CLOAT. A consistently shorter time to detection of small acceleration steps is found. Subjective reports of perceived motion during sinusoidal oscillation in weightlessness were qualitatively similar to reports on earth.

  9. Synaptic Correlates of Low-Level Perception in V1.

    PubMed

    Gerard-Mercier, Florian; Carelli, Pedro V; Pananceau, Marc; Troncoso, Xoana G; Frégnac, Yves

    2016-04-06

    The computational role of primary visual cortex (V1) in low-level perception remains largely debated. A dominant view assumes the prevalence of higher cortical areas and top-down processes in binding information across the visual field. Here, we investigated the role of long-distance intracortical connections in form and motion processing by measuring, with intracellular recordings, their synaptic impact on neurons in area 17 (V1) of the anesthetized cat. By systematically mapping synaptic responses to stimuli presented in the nonspiking surround of V1 receptive fields, we provide the first quantitative characterization of the lateral functional connectivity kernel of V1 neurons. Our results revealed at the population level two structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. First, subthreshold responses to oriented stimuli flashed in isolation in the nonspiking surround exhibited a geometric organization around the preferred orientation axis mirroring the psychophysical "association field" for collinear contour perception. Second, apparent motion stimuli, for which horizontal and feedforward synaptic inputs summed in-phase, evoked dominantly facilitatory nonlinear interactions, specifically during centripetal collinear activation along the preferred orientation axis, at saccadic-like speeds. This spatiotemporal integration property, which could constitute the neural correlate of a human perceptual bias in speed detection, suggests that local (orientation) and global (motion) information is already linked within V1. We propose the existence of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy are transiently updated and reshaped as a function of changes in the retinal flow statistics imposed during natural oculomotor exploration. The computational role of primary visual cortex in low-level perception remains debated. 
The expression of this "pop-out" perception is often assumed to require attention-related processes, such as top-down feedback from higher cortical areas. Using intracellular techniques in the anesthetized cat and novel analysis methods, we reveal unexpected structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. These structural-functional biases provide a substrate, within V1, for contour detection and, more unexpectedly, global motion flow sensitivity at saccadic speed, even in the absence of attentional processes. We argue for the concept of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy changes with retinal flow statistics, and more generally for a renewed focus on intracortical computation.

  10. Speed and direction changes induce the perception of animacy in 7-month-old infants

    PubMed Central

    Träuble, Birgit; Pauen, Sabina; Poulin-Dubois, Diane

    2014-01-01

    A large body of research has documented infants’ ability to classify animate and inanimate objects based on static or dynamic information. It has been shown that infants less than 1 year of age transfer animacy-specific expectations from dynamic point-light displays to static images. The present study examined whether basic motion cues that typically trigger judgments of perceptual animacy in older children and adults lead 7-month-olds to infer an ambiguous object’s identity from dynamic information. Infants were tested with a novel paradigm that required inferring the animacy status of an ambiguous moving shape. An ambiguous shape emerged from behind a screen and its identity could only be inferred from its motion. Its motion pattern varied distinctively between scenes: it either changed speed and direction in an animate way, or it moved along a straight path at a constant speed (i.e., in an inanimate way). At test, the identity of the shape was revealed and it was either consistent or inconsistent with its motion pattern. Infants looked longer on trials with the inconsistent outcome. We conclude that 7-month-olds’ representations of animates and inanimates include category-specific associations between static and dynamic attributes. Moreover, these associations seem to hold for simple dynamic cues that are considered minimal conditions for animacy perception. PMID:25346712

  11. M.I.T./Canadian vestibular experiments on the Spacelab-1 mission: 6. Vestibular reactions to lateral acceleration following ten days of weightlessness

    NASA Technical Reports Server (NTRS)

    Arrott, A. P.; Young, L. R.

    1986-01-01

    Tests of otolith function were performed pre-flight and post-flight on the science crew of the first Spacelab Mission with a rail-mounted linear acceleration sled. Four tests were performed using horizontal lateral (y-axis) acceleration: perception of linear motion, a closed loop nulling task, dynamic ocular torsion, and lateral eye deviations. The motion perception test measured the time to detect the onset and direction of near threshold accelerations. Post-flight measures of threshold and velocity constant obtained during the days immediately following the mission showed no consistent pattern of change among the four crewmen compared to their pre-flight baseline other than an increased variability of response. In the closed loop nulling task, crewmen controlled the motion of the sled and attempted to null a computer-generated random disturbance motion. When performed in the light, no difference in ability was noted between pre-flight and post-flight. In the dark, however, two of the four crewmen exhibited somewhat enhanced performance post-flight. Dynamic ocular torsion was measured in response to sinusoidal lateral acceleration which produces a gravitoinertial stimulus equivalent to lateral head tilt without rotational movement of the head. Results available for two crewmen suggest a decreased amplitude of sinusoidal ocular torsion when measured on the day of landing (R+0) and an increasing amplitude when measured during the week following the mission.

  12. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    PubMed Central

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

    Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. 
Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782

  13. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements for difficult task dynamics.

  14. Technology evaluation of man-rated acceleration test equipment for vestibular research

    NASA Technical Reports Server (NTRS)

    Taback, I.; Kenimer, R. L.; Butterfield, A. J.

    1983-01-01

    The considerations for eliminating acceleration noise cues in horizontal, linear, cyclic-motion sleds intended for both ground and shuttle-flight applications are addressed. The principal concerns are the acceleration transients associated with changes in direction of motion for the carriage. The study presents a design limit for acceleration cues or transients based upon published measurements of thresholds of human perception for linear cyclic motion. The sources and levels of motion transients are presented based upon measurements obtained from existing sled systems. The recommended approach to a noise-free system uses air bearings for the carriage support and moving-coil linear induction motors operating at low frequency as the drive system. Metal belts running on air-bearing pulleys provide an alternate approach to the drive system. The appendix presents a discussion of alternate testing techniques intended to provide preliminary data by means of pendulums, linear-motion devices, and commercial air-bearing tables.

  15. Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease

    PubMed Central

    Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.

    2013-01-01

    We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find perceptual thresholds for letter and word discrimination that increase from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256

  16. Apparent motion determined by surface layout not by disparity or three-dimensional distance.

    PubMed

    He, Z J; Nakayama, K

    1994-01-13

    Ecologically, the most meaningful events, including the motion of objects, occur on or in relation to surfaces. We run along the ground, cars travel on roads, balls roll across lawns, and so on. Even though there are other motions, such as the flight of birds, motion along surfaces is likely more frequent and more significant biologically. To examine whether events occurring in relation to surfaces have a preferred status in terms of visual representation, we asked whether the phenomenon of apparent motion would show a preference for motion attached to surfaces. Using a competitive three-dimensional motion paradigm, we found a preference to see motion between tokens placed within the same disparity-defined plane as opposed to different planes. Supporting our surface-layout hypothesis, the effect of disparity was eliminated either by slanting the tokens so that they were all seen within the same surface plane or by inserting a single slanted background surface upon which the tokens could rest. Additionally, a highly curved stereoscopic surface led to the perception of a more circuitous motion path defined by that surface, instead of the shortest path in three-dimensional space.

  17. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its superior ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
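    As a hedged illustration of the core learning rule named above, here is a minimal single Growing When Required (GWR) network in the spirit of Marsland et al.'s algorithm: a node is grown only when the best-matching unit is both poorly activated and already well trained (habituated). The paper's actual model uses hierarchies of such networks over pose and motion features; the thresholds, toy data, and class structure below are invented for illustration, and node-removal housekeeping is omitted.

```python
import math
import random

class GWR:
    """Minimal single GWR network: grow when the best-matching node is
    both poorly activated and already habituated; otherwise adapt it."""

    def __init__(self, dim, act_thresh=0.9, hab_thresh=0.3,
                 eps_b=0.2, tau=3.0, max_age=50):
        self.act_thresh, self.hab_thresh = act_thresh, hab_thresh
        self.eps_b, self.tau, self.max_age = eps_b, tau, max_age
        self.w = [[random.random() for _ in range(dim)] for _ in range(2)]
        self.h = [1.0, 1.0]   # habituation counters (1 = fresh, -> 0 with use)
        self.edges = {}       # (i, j) with i < j -> age

    def _dist(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def train(self, x):
        order = sorted(range(len(self.w)), key=lambda i: self._dist(x, self.w[i]))
        s, t = order[0], order[1]              # best and second-best units
        self.edges[tuple(sorted((s, t)))] = 0  # connect/refresh best pair
        activity = math.exp(-self._dist(x, self.w[s]))
        if activity < self.act_thresh and self.h[s] < self.hab_thresh:
            # Poor match by a well-trained node: grow a new node between them.
            r = len(self.w)
            self.w.append([(a + b) / 2 for a, b in zip(self.w[s], x)])
            self.h.append(1.0)
            self.edges.pop(tuple(sorted((s, t))), None)
            self.edges[tuple(sorted((r, s)))] = 0
            self.edges[tuple(sorted((r, t)))] = 0
        else:
            # Otherwise move the winner toward the input and habituate it.
            self.w[s] = [w + self.eps_b * self.h[s] * (xi - w)
                         for w, xi in zip(self.w[s], x)]
            self.h[s] += (1.0 / self.tau) * (0.0 - self.h[s])
        for e in list(self.edges):             # age and prune the winner's edges
            if s in e:
                self.edges[e] += 1
                if self.edges[e] > self.max_age:
                    del self.edges[e]

random.seed(0)
net = GWR(dim=2)
# Two toy input clusters; the network should allocate nodes near both.
data = [(0.1, 0.1), (0.9, 0.9), (0.12, 0.08), (0.88, 0.92)] * 50
for x in data:
    net.train(list(x))
```

After training, `net.w` contains more than the two initial prototypes, with nodes settled near each cluster; growth stops once every input is well matched.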

  18. A neural model of visual figure-ground segregation from kinetic occlusion.

    PubMed

    Barnes, Timothy; Mingolla, Ennio

    2013-01-01

    Freezing is an effective defense strategy for some prey, because their predators rely on visual motion to distinguish objects from their surroundings. An object moving over a background progressively covers (deletes) and uncovers (accretes) background texture while simultaneously producing discontinuities in the optic flow field. These events unambiguously specify kinetic occlusion and can produce a crisp edge, depth perception, and figure-ground segmentation between identically textured surfaces: percepts that all disappear without motion. Given two abutting regions of uniform random texture with different motion velocities, one region appears to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities. Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary is moving coherently with the moving texture. A computational model of visual areas V1 and V2 shows how interactions between orientation- and direction-selective cells first create a motion-defined boundary and then signal kinetic occlusion at that boundary. Activation of model occlusion detectors tuned to a particular velocity results in the model assigning the adjacent surface with a matching velocity to the far depth. A weak speed-depth bias brings faster-moving texture regions forward in depth in the absence of occlusion (shearing motion). These processes together reproduce human psychophysical reports of depth ordering for key cases of kinetic occlusion displays. Copyright © 2012 Elsevier Ltd. All rights reserved.
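    The accretion/deletion rule the abstract describes can be stated compactly: the region whose texture slips relative to the boundary is being accreted or deleted there, so it is assigned to the far depth, while the region whose texture moves with the boundary owns it and appears near. Below is a hedged, rule-based sketch of that assignment, not the article's V1/V2 circuit model; the function name and arguments are invented for illustration.

```python
def depth_order(v_left, v_right, v_boundary, tol=1e-6):
    """Assign figure/ground at a motion-defined boundary from 1-D
    signed texture and boundary velocities.

    A region whose texture moves with the boundary owns it (figure, near);
    a region whose texture slips relative to the boundary is accreted or
    deleted there (ground, far).
    """
    left_slips = abs(v_left - v_boundary) > tol    # texture accreted/deleted
    right_slips = abs(v_right - v_boundary) > tol
    if left_slips and not right_slips:
        return {"near": "right", "far": "left"}
    if right_slips and not left_slips:
        return {"near": "left", "far": "right"}
    # Both or neither region slips: shearing motion, occlusion cue absent.
    return {"near": None, "far": None}

# Moving texture, stationary boundary: the moving region appears farther.
print(depth_order(v_left=1.0, v_right=0.0, v_boundary=0.0))  # left is far
# Boundary moves with the moving texture: that region becomes the figure.
print(depth_order(v_left=1.0, v_right=0.0, v_boundary=1.0))  # left is near
```

The two demo calls reproduce the two key cases named in the abstract; the shearing case falls through to the weak speed-depth bias, which this sketch leaves out.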

  19. Modeling human behaviors and reactions under dangerous environment.

    PubMed

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

    This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics, and psychology, to model human behavior, reactions, and decision making under expected and unexpected dangers in real time in virtual environments. The development of the system includes: classifying the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling the characters' perceptions, modeling the characters' decision making, modeling the characters' movements, modeling the characters' interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories and in the design of buildings, airplanes, ships, and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch, and hearing; and the diversity and effects of emotion and personality in decision making. There are three types of software platforms that could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.

  20. How visual illusions illuminate complementary brain processes: illusory depth from brightness and apparent motion of illusory contours

    PubMed Central

    Grossberg, Stephen

    2014-01-01

    Neural models of perception clarify how visual illusions arise from adaptive neural processes. Illusions also provide important insights into how adaptive neural processes work. This article focuses on two illusions that illustrate a fundamental property of global brain organization; namely, that advanced brains are organized into parallel cortical processing streams with computationally complementary properties. That is, in order to compute certain combinations of properties, each cortical stream is unable to compute the complementary combinations. Interactions between these streams, across multiple processing stages, overcome their complementary deficiencies to compute effective representations of the world, and to thereby achieve the property of complementary consistency. The two illusions concern how illusory depth can vary with brightness, and how apparent motion of illusory contours can occur. Illusory depth from brightness arises from the complementary properties of boundary and surface processes, notably boundary completion and surface filling-in, within the parvocellular form processing cortical stream. This illusion depends upon how surface contour signals from the V2 thin stripes to the V2 interstripes ensure complementary consistency of a unified boundary/surface percept. Apparent motion of illusory contours arises from the complementary properties of form and motion processes across the parvocellular and magnocellular cortical processing streams. This illusion depends upon how illusory contours help to complete boundary representations for object recognition, how apparent motion signals can help to form continuous trajectories for target tracking and prediction, and how formotion interactions from V2-to-MT enable completed object representations to be continuously tracked even when they move behind intermittently occluding objects through time. PMID:25389399
